I built a system that required multiple services to work independently and concurrently, yet each needed to rely on input from other running services and make decisions accordingly. Building the app around events solved many issues and made the app resilient, modular, easy to maintain, and easy to learn from.
I really enjoyed Bobby Calderwood in this episode. I'm currently building an event-sourcing system and it gave me a lot to think through. I remember reading about the decider pattern but I didn't really internalize it; Bobby's way of explaining made a lot of things click for me.
Isn't the decider pattern just an input validator that runs before emitting a state-change event, or am I missing the nuance here?
Also, the idea of implementing a pure functional decider pattern sounds very cumbersome. I imagine that if the business logic depends on data from outside sources (e.g. environment variables, feature flags, a database), they all have to be modeled as 'react' events that need to be executed as a precursor to the 'evolve' step (the actual business logic)...
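As a sketch of the nuance: the decider pattern is usually described as two pure functions, `decide` (command + current state in, events out) and `evolve` (state + event in, new state out), so it is state-dependent validation rather than plain input validation. A minimal Python sketch, using a hypothetical bank-account domain invented purely for illustration:

```python
from dataclasses import dataclass

# Hypothetical domain objects; names are illustrative, not from the episode.
@dataclass(frozen=True)
class Deposit:      # a command: a request to change state
    amount: int

@dataclass(frozen=True)
class Deposited:    # an event: a fact that the change happened
    amount: int

def decide(command, state):
    """Validate a command against current state; emit events. Pure."""
    if isinstance(command, Deposit):
        if command.amount <= 0:
            return []                       # rejected: no fact recorded
        return [Deposited(command.amount)]
    return []

def evolve(state, event):
    """Fold one event into state. Pure."""
    if isinstance(event, Deposited):
        return state + event.amount
    return state

# Wiring: decide produces events, evolve folds them into state.
state = 0
for e in decide(Deposit(50), state):
    state = evolve(state, e)
print(state)  # 50
```

Because both functions are pure, impure inputs (feature flags, DB lookups) have to arrive either inside the command or via a separate reaction step, which is exactly the cumbersomeness the comment above describes.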
Great interview! Appreciate the enthusiasm of the guest and coverage.
I'd love that follow up episode
I have used transaction files (recording what changed) in OOP code and every other type of language for more than 40 years. Any code base can adopt an event style without much difficulty. But forcing all changes to be recorded even when there is no purpose is like treating programming as a religion. I program almost exclusively in an event style, but I also just use the current state whenever I don't need to track those changes.
Great podcast, many thanks for it. As an early adopter of DDD and event sourcing (ES), I can only confirm most or all of what was said in the video about the benefits of ES. We were early adopters of Greg's EventStoreDB, and I have written my own various implementations of an event DB on top of Oracle, SQL Server, and even the file system. It's been a lot of fun.
But luckily you guys didn't stop at ES and also touched on (although too briefly) the design part. Yes, indeed, Event Modeling is a great methodology for modeling a complex system and lowering the cognitive load by concentrating on the flow of information, getting that predictability back.
## Main takeaways
- Event sourcing is, at its core, an append-only file with an I/O handler like Kafka.
- The immutable log is the state, as a stream of events accessed through a left fold: accumulating the state changes produces the current state.
- While both message passing and event handling pass data between components, message passing involves sending messages to trigger actions or to communicate between components; event passing only captures state changes.
- There is no query or usage pattern imposed; the behavior of a system can be derived from its output (although query models are often still implemented for optimization).
- Techniques like optimistic concurrency control are still needed for managing faults.
- We still need a write-ahead log plus some mid-tier state manager that builds indexes for intermediate states.
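The left-fold takeaway above fits in a few lines of Python. The event shapes here are hypothetical and not tied to any particular event store:

```python
from functools import reduce

# A hypothetical append-only log: (event_type, payload) tuples.
log = [
    ("AccountOpened", {"balance": 0}),
    ("Deposited", {"amount": 100}),
    ("Withdrawn", {"amount": 30}),
]

def apply(state, event):
    """Fold a single event into the accumulated state."""
    kind, payload = event
    if kind == "AccountOpened":
        return {"balance": payload["balance"]}
    if kind == "Deposited":
        return {"balance": state["balance"] + payload["amount"]}
    if kind == "Withdrawn":
        return {"balance": state["balance"] - payload["amount"]}
    return state  # unknown event types are ignored

# Current state is literally a left fold over the immutable log.
current = reduce(apply, log, None)
print(current)  # {'balance': 70}
```

A read model is then just this fold cached and kept up to date as new events append, which is where the "implemented for optimization" point comes in.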
Wow there was a lot packed into this one. Pretty challenging stuff, but I'm learning a lot here. Your channel deserves to be much bigger than it is :)
Dudes are basically discussing multiplayer game networking and its "prediction" problem.
Great content as always, thank you!
I nearly clicked off when I heard "We're standing at the dawn of an AI revolution" (we're standing on the brink of the AI bubble bursting) - but I quite like the idea of a small-scale LLM that runs locally and makes _informed, verifiable_ statements based on your own event streams
In the customer journey domain with online sales, customers demand certain events. And they're not getting them. Knowledge in this area is worth billions.
Regarding domain blindness around the 12:00 mark: the domain blindness happens because keeping an event ledger without using event sourcing (with separate read/write models) is a big pain in the butt. lol
This was great!
Ignore the content, I like your voice
Aw, thanks. :-)
Events are mainly useful across multiple processes; they add unnecessary complexity to something that can run in one process. But for recording multimedia, for example, I have one thread handling the actual buffering and writing to disk, and another thread drawing the VU meters. An event triggers 10 times per second to update the sound-level value. The actual drawing of the display is deferred to the background process so it synchronizes with the screen drawing and doesn't cause latency or "tearing". Without an event, the logic becomes error-prone: it's possible to drop audio frames, or to freeze the application waiting on some unforeseen condition such as a bug in the meter display. With an event, the thread running the display can freeze but recording still continues without losing data.
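The decoupling described above, a recorder that never waits on the display, can be sketched with a bounded queue as the event channel. This is a toy model with no real audio I/O; the frame counts and names are invented:

```python
import queue
import threading

level_events = queue.Queue(maxsize=1)   # "latest level" mailbox for the UI
recorded = []                           # stands in for frames written to disk
seen = []                               # level updates the meter displayed
done = threading.Event()

def recorder():
    """Buffers/writes audio; never blocks waiting for the display."""
    for frame in range(100):
        recorded.append(frame)              # the real work: write to disk
        try:
            level_events.put_nowait(frame)  # fire the level event...
        except queue.Full:
            pass                            # ...but drop it if the UI is behind
    done.set()

def meter():
    """The VU-meter thread; it may lag without hurting the recording."""
    while not (done.is_set() and level_events.empty()):
        try:
            seen.append(level_events.get(timeout=0.01))
        except queue.Empty:
            pass

t_rec = threading.Thread(target=recorder)
t_ui = threading.Thread(target=meter)
t_ui.start(); t_rec.start()
t_rec.join(); t_ui.join()
# Every frame was recorded, even though the meter may have skipped updates.
```

The key design choice is `put_nowait` plus a dropped-update policy: a frozen meter costs you some level readings, never audio frames.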
Event-driven software can be hard to write. To be honest, the last time I tried was in C++ on Windows NT 4 for an amusement park ride; basically it was a PLC-to-PC data translation program. The events were not predictable.
The industry is filled with interesting ebbs and flows. Millions of dollars chasing the next big thing in enterprise software, that is, building tomorrow's painful legacy, one exciting revelation (a.k.a. dependency) at a time. Titans "betting the company" on ideas that fizzle, only to switch focus, ultimately contradicting the depth of the bet in the first place. Rinse, repeat :)
Yeah, I've some sympathy with that argument. We do have a propellerhead tendency in our industry, jumping at fads and silver bullets in a heartbeat. Remember when Pair Programming and TDD were going to usher in a new golden age that made everything that came before look like the wild west? Where's all that hype now?
But at the same time, we're also surprisingly intransigent. It's nearly 60 years since Tony Hoare committed his Billion Dollar Mistake and we're still inventing languages with null pointer exceptions. We've had networking for even longer and yet writing code that runs on multiple computers is still significantly harder than a single machine. Perl is in the build stack of nearly everything and COBOL is probably still at the heart of your bank, despite the fact that nearly no-one would make those choices today.
We're both moving too fast and too slow. Which, on reflection, fits with our industry's age. Industrial computing is older than multiple careers, but still shorter than a single lifetime. So we're continually seeing people pushing for 'the new way' while at the same time being young enough to remember when you could run an entire accounting department on 64kb. Our memory is both too short and too long.
Maybe one day we'll have it all figured out, but we're definitely not there yet. I keep searching for new ideas, experimenting, and adopting the most promising ones. Cautiously. 🙂
@DeveloperVoices so true... I suspect a lot of it is due to responding to change as things move so quickly, with so much positive feedback. Ultimately it results in dichotomy and contradictions, which is what makes it all so painful and fun :)
I'm curious as to how event sourcing deals with GDPR - it seems like that would be particularly tricky if the whole point is to have an immutable log.
IMO it actually makes things simpler. With a properly designed system you could treat the log as a tree, and just delete the root account "node"/event and then recursively delete all "nodes"/events that referenced it. Obviously it gets harder when things reference multiple "roots", but it's not an impossible problem to solve, and you would have to handle that with any other system as well.
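The recursive "delete the root and everything that references it" idea might look like this over a flat log. The event structure here is hypothetical; a fixed-point loop stands in for recursion so multiple levels of references are handled:

```python
# Hypothetical log: each event has an id and an optional parent reference.
log = [
    {"id": 1, "parent": None, "type": "AccountCreated"},
    {"id": 2, "parent": 1,    "type": "EmailChanged"},
    {"id": 3, "parent": 2,    "type": "EmailVerified"},
    {"id": 4, "parent": None, "type": "UnrelatedEvent"},
]

def delete_subtree(log, root_id):
    """Remove an event and, transitively, every event referencing it."""
    doomed = {root_id}
    changed = True
    while changed:                  # iterate to a fixed point
        changed = False
        for e in log:
            if e["parent"] in doomed and e["id"] not in doomed:
                doomed.add(e["id"])
                changed = True
    return [e for e in log if e["id"] not in doomed]

remaining = delete_subtree(log, 1)
print([e["id"] for e in remaining])  # [4]
```

As the comment notes, events referencing multiple roots would need a set of parent references instead of a single one, but the fixed-point approach extends naturally.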
Usually you encrypt the sensitive data with a key (as part of the event body) and if you are requested to delete whatever you simply delete the key. The event is still there (with all the meta information), but not the sensitive parts.
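That technique is often called crypto-shredding. A toy sketch of the shape of it, where a throwaway XOR stands in for a real cipher (a production system would use something like AES-GCM via the `cryptography` package) and all names are invented:

```python
import secrets

keystore = {}  # per-subject encryption keys, stored OUTSIDE the event log

def encrypt_for(subject_id, plaintext: bytes) -> bytes:
    # One key per subject; XOR with a random pad is purely illustrative.
    key = keystore.setdefault(subject_id, secrets.token_bytes(len(plaintext)))
    return bytes(a ^ b for a, b in zip(plaintext, key))

def decrypt_for(subject_id, ciphertext: bytes):
    key = keystore.get(subject_id)
    if key is None:
        return None  # key shredded: the data is gone for good
    return bytes(a ^ b for a, b in zip(ciphertext, key))

# The immutable event keeps its metadata; only the body is encrypted.
event = {"type": "AddressChanged",
         "body": encrypt_for("user-42", b"221B Baker Street")}

assert decrypt_for("user-42", event["body"]) == b"221B Baker Street"

del keystore["user-42"]  # GDPR deletion request: shred the key
assert decrypt_for("user-42", event["body"]) is None  # event intact, data gone
```

The log itself is never rewritten, which preserves the append-only guarantee while still satisfying the deletion request.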
Events are obviously a good thing to listen for in architecture, as everything that happens spontaneously is an event. Still, ECS looks better even for events: an event is just a tag added to an entity that can hold any data, and that data (the components) can be extended with further components in a pipeline of systems execution. OOP, with its jumps around RAM, doesn't get fixed by events, but there is no way out of OOP's domination. Technically, events make your program even worse in terms of performance, since decoupling in OOP means more pointers placed in RAM in random order. It gets so bad that your performance testing is never valid: each time you run the test, everything is placed randomly again, so the program could suddenly run much faster or much slower.
When does the video about the Odin programming language come out? Hehehe, sorry, just asking.
Hehe, fair question. Either next week (10th Jan) or the week after. 🙂
How do these systems cope when you really need to delete something for legal reasons? Like, a user requests their data to be deleted, but that should of course not remove the views that user gave to a video. Or, put differently: when you can't keep that detailed a log for legal reasons, can you still use this?
This is an excellent question. In my experience there is really no way of doing it without modifying the underlying events, so you'd need to anonymize the data in (for example) an addCustomerDataToCheckout event. Bit of a PITA.
A system I've worked in had user IDs or user data replaced with claim check IDs and a specified correlation level - none, per transaction, session, or global. Those claim check IDs pointed to a mapping table that pointed at e.g. an encrypted store elsewhere. You deleted user data from the encrypted store and the connection in the mapping table and the events/messages were unchanged as they were just e.g. an event type of address changed and a pair of claim checks to the user ID and the user input. Overkill for some cases but worked well there.
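That claim-check indirection might look roughly like this; the store layout and names are hypothetical, and a plain dict stands in for the encrypted store:

```python
import uuid

pii_store = {}  # stands in for the encrypted store holding real user data
events = []     # the immutable log: holds only claim-check IDs, never PII

def store_pii(value):
    """Park a sensitive value and return a claim-check ID for it."""
    claim_id = str(uuid.uuid4())
    pii_store[claim_id] = value
    return claim_id

def record_address_change(user_id, new_address):
    events.append({
        "type": "AddressChanged",
        "user_claim": store_pii(user_id),
        "address_claim": store_pii(new_address),
    })

def resolve(claim_id):
    return pii_store.get(claim_id)  # None once the data has been deleted

record_address_change("user-42", "221B Baker Street")
evt = events[0]
assert resolve(evt["address_claim"]) == "221B Baker Street"

# GDPR deletion: wipe the PII; the event itself never changes.
pii_store.clear()
assert resolve(evt["address_claim"]) is None
assert evt["type"] == "AddressChanged"  # the log is still intact
```

Readers that only need the fact that "an address changed" keep working after deletion; only consumers that try to resolve the claim checks see the data is gone.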
This is pretty interesting, but I would never want to work at a company micromanaging to the extent he seems excited about. No way in hell.