Daniel Tammadge
  • 27 videos
  • 81,097 views
Amazon SQS Explained: When to Use It & When to Look Elsewhere
Unlock the full potential of Amazon SQS for your cloud architecture in this latest video. Dive deep into the specific scenarios where Amazon Simple Queue Service (SQS) shines, how it compares with Amazon Simple Notification Service (SNS), Amazon Kinesis, and Apache Kafka, and how it can be used synergistically with Amazon EventBridge. Whether you’re looking to enhance microservice communication, optimize performance during traffic surges, or implement delayed job execution, this video provides you with the insights you need to make informed decisions. Plus, we’ll explore situations where other services might be a better fit and how to decide between SQS, SNS, Kinesis, Kafka, and EventBridge ...
Views: 129

Videos

Mastering Message Queuing: AWS SQS Introduction for Architects & Developers
197 views • 8 months ago
#amazonsqs #danieltammadge Explore the world of AWS messaging services with this comprehensive guide to Amazon Simple Queue Service (SQS). Dive deep into how SQS empowers microservices, distributed systems, and serverless architectures with robust, scalable messaging solutions. Learn about the asynchronous communication model, message lifecycle, queue types, and the rich features SQS offers. Wh...
Using both Orchestration & Choreography in a serverless Event-Driven AWS & Kafka system
355 views • 1 year ago
In this video, we discuss the trend for event-driven architecture and microservices and explore the advantages and disadvantages of using orchestration versus choreography in serverless event-driven systems. We describe the benefits of choreography, including promoting loose coupling between services, allowing services to evolve independently, improving scalability and performance, promoting si...
12 factor applications vs microservices | #softwaredesign 101
224 views • 1 year ago
This video discusses the differences between 12 factor apps and #microservices, their principles and benefits. It also explores how the principles of 12 factor apps can be applied when building microservices and how microservices can address the challenges of monolithic applications. #softwaredesign #danieltammadge
Event-Driven Architecture: The Five Patterns Explained
2K views • 1 year ago
In this video, I'll explain the five types of Event-Driven Architecture: Event-Notification, Event-Carried State Transfer, CQRS, Event-Sourcing and Event-Streaming. After this video go watch Message Brokers vs Event-Brokers ruclips.net/video/7QCRaHKl7sg/видео.html Also the following book is a MUST read amzn.to/3mDuBHD Designing Data-Intensive Applications: The Big Ideas Behind Reliable...
Serverless Event-Driven Architecture. Database-first with HTTP/REST APIs, #DynamoDB and #CDC
293 views • 1 year ago
In this video clip, I show a solution for how to use AWS API Gateway to proxy to DynamoDB to ensure no data loss when publishing events to an event stream, and how to use WebSockets to push updates to connected clients. Watch the full video ruclips.net/video/xRDU-LbLftU/видео.html where I compare database-first and event-first using AWS services. #awsapigateway #Kineses #serverless #softwared...
Serverless Event-Driven architecture using AWS #serverless
496 views • 1 year ago
In this video, I explain how to combine request-driven and event-driven architectures (synchronous and asynchronous communication) to store and display real-time time-stamped data from an IoT device on a front-end application. I will explain and show two event publishing patterns, database-first and event-first, to get the data payloads stored in AWS DynamoDB and Amazon Kinesis Data Streams usi...
Beginners why you need to have a Schema Registry. Event Driven Architecture & Kafka basics
1.7K views • 2 years ago
Most people think that schema is only important for database systems, but nothing could be further from the truth. In this video, I'll show you why not having a schema can be a big mistake in event-driven systems. #eventdrivenarchitecture #systemdesign #danieltammadge I have gained my knowledge by watching talks and reading many articles, but more importantly designing, and running event-driven...
How not to lose events when publishing to Broker or Topic? | EDA basics
1.6K views • 2 years ago
Most developers are familiar with the publish/subscribe pattern, but what about when you need to guarantee that a message is published? In this video I introduce you to the Outbox Pattern - a guaranteed event publishing pattern. 0:14 What is the outbox pattern? 0:50 The outbox approach guarantees at least once event publishing, because? 1:15 Be careful with at least once publishing 1:28 Why ide...
Do not use this event publishing pattern | Event-Driven Architecture
803 views • 2 years ago
Event write-aside is one of many event publishing approaches. In this video, you will learn what event write-aside is and what are the drawbacks of using the approach when designing event-driven microservices #microservices, #eventdrivenarchitecture #danieltammadge #softwaredesign I have gained my knowledge by watching talks and reading many articles, but more importantly designing, and running...
Apache Kafka: Keeping the order of events when retrying due to failure
7K views • 2 years ago
How to handle message retries & failures in event-driven systems? While keeping event ordering? In this video, I explain how you can approach this problem. This video is part two to ruclips.net/video/GTHaVuThj_0/видео.html #eventdrivenarchitecture #danieltammadge #ApacheKafka #microservices
The difference between Message Brokers, Event Brokers & Event Streams (Kafka vs message brokers)
5K views • 2 years ago
What are the differences between message brokers, event brokers, and event streaming platforms, and what are their pros and cons when used in event-driven architecture? The following book is a MUST read amzn.to/3mDuBHD Designing Data-Intensive Applications: The Big Ideas Behind Reliable, Scalable, and Maintainable Systems (this is a book which I'd recommend for every technical person and has had...
How to handle message retries & failures in event-driven systems? Handling retries with Kafka?
31K views • 3 years ago
How to handle message retries & failures in event-driven systems? Handling retries with Kafka?
Tips on how to communicate as a Solution Architect | Solution Architecture 101
865 views • 3 years ago
Tips on how to communicate as a Solution Architect | Solution Architecture 101
What is Orchestration? How to implement it using Apache Kafka and 3 tips to prevent message loss
3.5K views • 3 years ago
What is Orchestration? How to implement it using Apache Kafka and 3 tips to prevent message loss
How to trigger specific microservices after events are processed? Apache Kafka. EDA 101
1.4K views • 3 years ago
How to trigger specific microservices after events are processed? Apache Kafka. EDA 101
Event-Driven Architecture | Event-Driven VS Request-Driven Architecture, When Not To Use Events
15K views • 3 years ago
Event-Driven Architecture | Event-Driven VS Request-Driven Architecture, When Not To Use Events
Why you need to develop API. Microservice architecture 101
86 views • 3 years ago
Why you need to develop API. Microservice architecture 101
Building REST APIs: follow these rules to create awesome APIs
416 views • 3 years ago
Building REST APIs: follow these rules to create awesome APIs
What is an API, and how do APIs enable automation and scalability? | 6 Benefits of APIs
2.2K views • 3 years ago
What is an API, and how do APIs enable automation and scalability? | 6 Benefits of APIs
Why use events in microservices? | Event Driven Architecture 101
1K views • 3 years ago
Why use events in microservices? | Event Driven Architecture 101
The difference between Messages & Events | Event Driven Architecture 101
2.2K views • 3 years ago
The difference between Messages & Events | Event Driven Architecture 101

Comments

  • @DesuTechHub
    @DesuTechHub 1 month ago

    5:50 is the lesson from experience

  • @abhishekbajpai1208
    @abhishekbajpai1208 3 months ago

    Good explanation.

  • @StephenTD
    @StephenTD 7 months ago

    Thanks for this. I was looking forward to this video since watching your introduction to Amazon SQS

  • @Danieltammadge
    @Danieltammadge 7 months ago

    Realised I used the wrong Icon in the presentation 😂

  • @hemanthaugust7217
    @hemanthaugust7217 7 months ago

    At 2:50, when you said that after successfully processing the failed event you'd republish all the holding events, in which topic will you publish them? I believe you meant the retry topic, right? Not the main topic, as the events could get out of order. However, there is a catch here. Once a failed event has been successfully processed, will you delete that record? If a new event is still in the main topic, it doesn't know that there is a failed event or holding event. So, are you suggesting that we always need to check both the failed events table & the holding events table? If there is any record in the holding events table, add new events to the holding table? If there is a lot of traffic, one failed event can lead to all events going into the holding table and slowing down the event processing. How do we solve this?

    • @Danieltammadge
      @Danieltammadge 7 months ago

      In my video around the 2:20 mark, I explained the process of republishing events from the holding table back to a retry topic, not the main event topic. This approach is crucial for maintaining order and avoiding duplicate processing, especially since multiple consumers might be consuming from the same topic.
      Deletion of Records: The strategy for deleting records depends on your specific needs for tracking and managing data. You have the option to permanently delete the record or employ a soft delete method, where the record is marked as inactive or flagged without being physically removed from the database.
      Event Processing Awareness: It is essential for the consumer to perform checks as it processes events to determine if there are related failed or holding events. This step ensures that event handling is informed and accurate.
      Handling High Traffic and Failed Events: High traffic and a single failed event can indeed result in multiple events being redirected to the holding table, potentially slowing down overall event processing. To address this challenge, it is crucial to implement a system that maintains strict ordering while still enabling event retries, without outright discarding them. However, this setup introduces additional latency due to the need for extra lookups, presenting a trade-off between reliability and processing speed.
      Alternative Approaches: One possible alternative is to discard failing events and not attempt retries. This approach simplifies processing but risks losing important event data. Another strategy is to design a more robust system that allows specific events to be retried multiple times by the same consumer. This could be achieved by partitioning topics so that related events are published to the same partitioned commit log, while unrelated events are directed to other partitions. This method isolates problem events so they do not affect the processing of unrelated events, maintaining processing efficiency even under high traffic conditions.
      By adopting these strategies, it's possible to balance event ordering, processing efficiency, and the capability to handle retries effectively.
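
      A minimal sketch of the consumer-side checks described above (assuming confluent-kafka-python and SQLite; the topic, table, and column names are hypothetical placeholders):

      ```python
      import sqlite3

      from confluent_kafka import Consumer

      consumer = Consumer({
          "bootstrap.servers": "localhost:9092",
          "group.id": "order-processor",
          "enable.auto.commit": False,
          "auto.offset.reset": "earliest",
      })
      consumer.subscribe(["orders"])

      db = sqlite3.connect("events.db")
      db.execute("CREATE TABLE IF NOT EXISTS failing_events "
                 "(event_key TEXT, payload TEXT, kafka_partition INTEGER, kafka_offset INTEGER)")
      db.execute("CREATE TABLE IF NOT EXISTS holding_events "
                 "(event_key TEXT, payload TEXT, kafka_partition INTEGER, kafka_offset INTEGER)")

      def has_failed_events(event_key: str) -> bool:
          row = db.execute("SELECT 1 FROM failing_events WHERE event_key = ? LIMIT 1",
                           (event_key,)).fetchone()
          return row is not None

      def process(payload: str) -> None:
          ...  # business logic; raises on failure

      while True:
          msg = consumer.poll(1.0)
          if msg is None or msg.error():
              continue
          key, payload = msg.key().decode(), msg.value().decode()  # assumes keyed events
          if has_failed_events(key):
              # An earlier event for this key is still unresolved: park the new event,
              # keeping its partition/offset so it can be re-driven in order later.
              db.execute("INSERT INTO holding_events VALUES (?, ?, ?, ?)",
                         (key, payload, msg.partition(), msg.offset()))
          else:
              try:
                  process(payload)
              except Exception:
                  db.execute("INSERT INTO failing_events VALUES (?, ?, ?, ?)",
                             (key, payload, msg.partition(), msg.offset()))
          db.commit()
          consumer.commit(message=msg)  # the event is recorded either way, so the offset can advance
      ```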

  • @raj007ind
    @raj007ind 7 months ago

    Hi Daniel, I am facing a similar problem. Please let me know where to create the Failing Event Log Table and the Holding Table. Is it in a separate database, or in ksqlDB for example?

    • @Danieltammadge
      @Danieltammadge 7 months ago

      I recommend incorporating these tables directly within your existing application database. This approach enables you to fully leverage ACID transaction capabilities, ensuring that operations involving multiple tables and records are executed atomically. This means that all parts of a transaction are completed successfully together, or none are applied at all, enhancing data integrity and consistency across your system.

  •  8 months ago

    (supportive comment)

  • @StephenTD
    @StephenTD 8 months ago

    Wow this is a good video. Very in-depth. Looking forward to the next one

    • @Danieltammadge
      @Danieltammadge 8 months ago

      More to come! And glad you enjoyed it

  • @xinyuzhang
    @xinyuzhang 8 months ago

    Thank you!!!!!!

  • @nonamespls3468
    @nonamespls3468 8 months ago

    You are good at explaining things, but the noise in your audio hurts the ears and I couldn't continue. You should try noise cancelling, or edit your videos to remove that noise

  • @ragingpahadi
    @ragingpahadi 9 months ago

    Awesome explanation

  • @MikeCyrus-v3m
    @MikeCyrus-v3m 9 months ago

    Good advice Daniel, cheers !!

  • @MikeCyrus-v3m
    @MikeCyrus-v3m 9 months ago

    We need more !!

    • @Danieltammadge
      @Danieltammadge 9 months ago

      I am currently scripting new videos, any particular topics/subjects you wish for me to delve deeper into?

  • @theokaralenka
    @theokaralenka 11 months ago

    Very nice tutorial! Straight to the point, well explained, no unnecessary blah-blah. Thanks!

  • @mirambekmustafin
    @mirambekmustafin 1 year ago

    Hi! Thanks for the great video! Have you tried Apicurio Registry? Need your opinion :)

  • @mateusz0037
    @mateusz0037 1 year ago

    Hello! Great material :) I have some questions regarding it. How do you deal with the scenario where you have already processed a failure message, but before you process the holding events some new event is consumed from the topic? If it is processed first, the current state will be overwritten by the holding events, right? So, what comes to my mind is to also check the holding events, but what then? Attach the newest event at the end and fire the holding events? You don't mention it during the video, so there is a high chance that I got something wrong; I would appreciate it if you could clarify.

    • @Danieltammadge
      @Danieltammadge 7 months ago

      Thanks for commenting. In answer to your question, based on what you've described, here are a few strategies to manage such scenarios effectively:
      1. Event Ordering and Timestamps: Ensure that every event in your system is timestamped or has a sequence number to maintain the order of events. This allows you to process events in the correct sequence, even if they are received out of order.
      2. State Management with Versioning: Implement versioning in your state management. When processing an event, check that the event's version matches the current state version or the expected next version. This approach avoids situations where the state is incorrectly overwritten by an out-of-order event.
      3. Event Handling Logic: When processing the holding events, check whether a newer event has been received that affects the current state. If so, you may need to re-evaluate the holding events in the context of this new state. This might involve discarding some holding events, updating them, or processing them differently based on the new information.
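
      A minimal sketch of the versioning check from point 2, assuming each event carries a monotonically increasing version number for its entity (the field names are hypothetical):

      ```python
      class OutOfOrderError(Exception):
          """Raised when an event arrives ahead of the expected version and should be parked."""

      def apply_event(state: dict, event: dict) -> dict:
          expected = state.get("version", 0) + 1
          if event["version"] < expected:
              return state                  # stale or already-applied event: ignore it
          if event["version"] > expected:
              raise OutOfOrderError(event)  # a gap: hold the event until the missing one arrives
          # Expected next version: apply the carried state and bump the version.
          return {**state, **event["data"], "version": event["version"]}

      state = {"version": 2, "status": "PLACED"}
      state = apply_event(state, {"version": 3, "data": {"status": "PAID"}})  # applied
      # apply_event(state, {"version": 5, ...}) would raise OutOfOrderError
      ```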

  • @abhishekanand2163
    @abhishekanand2163 1 year ago

    Hi Daniel, I am a little confused: what happens when consumer index 4 fails when it's in the retry topic? Should we put that event into the failing event table along with consumer index 5? Also, below is my understanding; can you please rectify anything I interpreted wrong?
    Main consumer -- checks if any event for the incoming customer is present in the failing table. If yes, put the incoming event into the holding table. If no, process it; if processed successfully, acknowledge the offset, else put it into the failing table.
    Retry Producer -- polls the failing table, creates an event and pushes it to the retry topic.
    Retry Consumer -- checks if any event is in the retry topic and processes it; if successful, gets all the messages from the holding table and pushes them to the retry topic.
    What happens if three messages were in the holding table for the same customer id, all three got pushed to the retry topic, and the first message fails? What does the retry consumer do?

    • @Danieltammadge
      @Danieltammadge 7 months ago

      Main Consumer Workflow:
      1. Incoming Event Handling: When an event arrives, the main consumer checks if there are any existing events for the same customer ID in the failing event table.
      • If Yes: The incoming event is moved to the holding table, to be processed after the failing events are resolved.
      • If No: The event is processed.
        • If Processed Successfully: Acknowledge the offset.
        • If Processing Fails: The event is moved to the failing event table.
      Retry Producer Workflow:
      • Polls the Failing Table: Regularly checks the failing table for events.
      • Event Reprocessing: For each event found, it creates a new event and publishes it to the retry topic.
      Retry Consumer Workflow:
      • Processing Retry Events: When an event is present in the retry topic, the retry consumer attempts to process it.
        • If Successful: It retrieves all related messages for the same customer ID from the holding table and publishes them to the retry topic to be processed in sequence.
        • If It Fails: The event should ideally be moved back into the failing table or a similar mechanism, possibly with an incremented retry count or a delay before the next attempt.
      Handling Failures in the Retry Topic:
      • Failed Retry Events: If an event from the retry topic (e.g., with consumer index 4) fails, it should be placed back into the failing event table. Optionally, include a retry count to avoid infinite retries.
      • Regarding consumer index 5 and others: If they depend on the successful processing of the previous event, they should also be managed to preserve the processing order. This might mean delaying their reprocessing or placing them in a sequence that respects their dependencies.
      Special Considerations:
      • Multiple Messages for the Same Customer ID: If multiple messages for the same customer ID are in the holding table and are pushed to the retry topic, but the first message fails again, the retry consumer should handle this by either pausing the processing of subsequent messages for this customer ID until the failing message is successfully processed, or by re-evaluating the order and dependencies of these messages. The goal is to ensure that messages are processed in a manner that respects their logical sequence and dependencies.
      Summary of Corrections/Clarifications:
      • Failing Events Handling: Your understanding is mostly correct. It's crucial to ensure that failing events, especially in a retry scenario, are handled in a way that respects the order of events and their dependencies. Failed retry events should be carefully managed, possibly with incremented retry attempts, and considering the impact on related events.
      • Dependency and Order Management: For events related to the same customer ID or having logical dependencies, ensure the processing order is maintained, particularly when dealing with failures and retries. This may involve sophisticated logic in the retry consumer to handle dependencies correctly.
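
      A minimal sketch of the retry consumer's "release held events" step, reusing the hypothetical table and topic names from the earlier sketch (holding_events, failing_events, orders.retry) and assuming confluent-kafka-python:

      ```python
      import sqlite3

      from confluent_kafka import Producer

      producer = Producer({"bootstrap.servers": "localhost:9092"})
      db = sqlite3.connect("events.db")

      def on_retry_success(event_key: str) -> None:
          """Once the failed event for event_key has finally been processed, release its held events."""
          held = db.execute(
              "SELECT payload FROM holding_events WHERE event_key = ? "
              "ORDER BY kafka_partition, kafka_offset",
              (event_key,),
          ).fetchall()
          for (payload,) in held:
              # Re-drive held events through the retry topic, not the main topic, so that
              # ordering for this key is preserved and other consumer groups are unaffected.
              producer.produce("orders.retry", key=event_key, value=payload)
          producer.flush()
          db.execute("DELETE FROM holding_events WHERE event_key = ?", (event_key,))
          db.execute("DELETE FROM failing_events WHERE event_key = ?", (event_key,))
          db.commit()
      ```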

  • @MrBillJDavis
    @MrBillJDavis 1 year ago

    This is great, thank you. It would be really helpful to talk about what issues might lead to a message getting retried and how that might dictate deciding on X number of retry topics.

    • @Danieltammadge
      @Danieltammadge 7 months ago

      Thanks. Here are some common reasons why a message might get retried:
      1. Transient Failures: If an event fails due to a transient issue (e.g., a temporary network failure, a dependent service being momentarily unavailable), retrying the event after a delay might result in successful processing. Moving the event to a retry topic allows the system to handle it separately without blocking the processing of new events.
      2. Rate Limiting and Backpressure: External systems or APIs might enforce rate limits, and surpassing these limits can result in failed event processing. Publishing failed events to a retry topic enables you to implement backoff strategies and control the rate at which you attempt to reprocess these events.
      3. Resource Contention: If processing fails due to resource contention (e.g., database locks, high CPU utilization), moving events to a retry topic allows the system to alleviate immediate pressure and retry processing later, possibly under more favorable conditions.
      4. Error Isolation and Analysis: Moving failed events to a separate topic makes it easier to isolate and analyze errors without disrupting the flow of successfully processed events. This separation facilitates monitoring, debugging, and fixing issues specific to the failed events.
      5. Prioritization of Events: In some scenarios, certain events might be more critical than others. If an event fails but does not immediately need to be retried (due to lower priority), it can be moved to a retry topic, allowing higher-priority events to be processed without delay.
      6. Maintaining Event Order: If the order of events is crucial, and a failed event needs to be processed before subsequent events, retrying the event while continuing to process others might violate the order. By using a retry topic, you can control the order of reprocessing to ensure that events are handled in the intended sequence.
      7. Handling Poison Messages: Some events might repeatedly fail processing due to being malformed or due to an issue that cannot be resolved immediately (poison messages). Moving these events to a separate topic prevents them from repeatedly causing failures in the main processing flow and allows for special handling or manual intervention.
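
      A minimal sketch of how a retry count might map onto a small number of retry topics with growing backoff (points 1-3), with events that exhaust every tier landing on a dead-letter topic (point 7); the topic names and delays are hypothetical:

      ```python
      # Hypothetical tiered retry topics with increasing delays.
      RETRY_TIERS = [("orders.retry.5s", 5), ("orders.retry.1m", 60), ("orders.retry.10m", 600)]
      DEAD_LETTER_TOPIC = "orders.dlq"

      def next_destination(retry_count: int) -> tuple:
          """Return (topic, delay_seconds) for an event that has already failed retry_count times."""
          if retry_count < len(RETRY_TIERS):
              return RETRY_TIERS[retry_count]
          return (DEAD_LETTER_TOPIC, 0)  # tiers exhausted: park for manual inspection

      print(next_destination(0))  # ('orders.retry.5s', 5)
      print(next_destination(3))  # ('orders.dlq', 0)
      ```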

  • @stasthesauce4641
    @stasthesauce4641 1 year ago

    Bruh, your vids are hecka underrated. Thanks again for learning and taking this time to do this.

  • @ikechimike6894
    @ikechimike6894 1 year ago

    Please include subtitles, I'm hearing-impaired. Please

    • @Danieltammadge
      @Danieltammadge 7 months ago

      I’ll be checking my videos soon to ensure they have subtitles

    • @ikechimike6894
      @ikechimike6894 7 months ago

      @@Danieltammadge thanks

  • @StephenTD
    @StephenTD 1 year ago

    Interesting: orchestration when invoking functions in the desired sequence, and then triggering other systems across domains, owned by other teams, using choreography with events. I’ve used AWS Step Functions and my team has used this approach, but we thought we were implementing an anti-pattern.

  • @MarcoLenzo
    @MarcoLenzo 1 year ago

    Thanks for the video Daniel. I love the topic and it was nice to see the way you explain it!

    • @Danieltammadge
      @Danieltammadge 1 year ago

      Thank you Marco for taking the time to watch and for leaving a comment. Glad you found the video useful

  • @luckys9310
    @luckys9310 1 year ago

    Do u have a udemy course?

    • @Danieltammadge
      @Danieltammadge 1 year ago

      No I do not. Everything I have published so far is free on RUclips or my blog

  • @javisartdesign
    @javisartdesign 1 year ago

    The C4 model and the 4+1 view model are really useful, just as you were saying. Great advice so far!

  • @javisartdesign
    @javisartdesign 1 year ago

    Really useful! Basically it's about using SSE, WebSockets, long polling, or SignalR so front ends can check and react to backend status changes

    • @Danieltammadge
      @Danieltammadge 1 year ago

      Thanks again Javis for taking your time to comment.

  • @javisartdesign
    @javisartdesign 1 year ago

    Do you recommend the Outbox pattern, or publishing a message and consuming it again just to update the database? The latter can lead to infinite loops if you do not pay attention, but you save resources and money since you don't have to maintain CDC and Kafka Connect yourself.

    • @Danieltammadge
      @Danieltammadge 1 year ago

      Hopefully I have understood your question, but here we go. What is not recommended is to consume an event, perform a database transaction which updates/inserts records, and then, after that database activity has finished/committed, publish another event within the same consuming process. Because what happens if the event publishing fails? You have lost data, and data/systems will be inconsistent. So it's recommended to only perform one network operation within a process, unless you can ensure state management, retryability and rollback/reconciliation processes. So use the outbox table pattern and use CDC on that table, or CDC on an entity table (i.e. an employees table).
      Let's use a different example. If I needed to trigger a process when an object is uploaded to an AWS S3 bucket, I could have a process which puts the object to the S3 bucket and, after it is uploaded, have the process publish an event. But what do I do if the event publishing fails? Do I delete the object? What do I do if the object deletion fails? I am going to have objects which are never going to be processed. I could develop a housekeeping process to clean up and find orphaned objects. I could. Or I enable CDC on the S3 bucket so that whenever an object is written to S3 an event is published, by a solution which is fully tested, stable, and highly performant. From my experience, not paying for CDC ends up being more costly in the long run.
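
      A minimal sketch of the outbox table pattern described above, using SQLite as the application database; the table names, the employee-events topic, and the EmployeeCreated event are hypothetical placeholders:

      ```python
      import json
      import sqlite3
      import uuid

      db = sqlite3.connect("app.db")
      db.execute("CREATE TABLE IF NOT EXISTS employees (id TEXT PRIMARY KEY, name TEXT)")
      db.execute("CREATE TABLE IF NOT EXISTS outbox "
                 "(id TEXT PRIMARY KEY, topic TEXT, payload TEXT, published INTEGER DEFAULT 0)")

      def create_employee(name: str) -> str:
          employee_id = str(uuid.uuid4())
          event = {"type": "EmployeeCreated", "id": employee_id, "name": name}
          with db:  # one ACID transaction: both rows commit together, or neither does
              db.execute("INSERT INTO employees VALUES (?, ?)", (employee_id, name))
              db.execute("INSERT INTO outbox (id, topic, payload) VALUES (?, ?, ?)",
                         (str(uuid.uuid4()), "employee-events", json.dumps(event)))
          # A CDC connector or a polling relay then publishes unpublished outbox rows to the
          # broker, so no event is lost if publishing is temporarily unavailable.
          return employee_id

      create_employee("Ada Lovelace")
      ```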

    • @javisartdesign
      @javisartdesign 1 year ago

      @@Danieltammadge That's correct. I mean you can also use another technique to publish and consume (in order to persist) your own event, to ensure you publish and store the data, so you do not perform two operations at the same time or need any kind of transaction protocol such as two-phase commit (2PC). Great comment, thanks

  • @sagarbhong-f5q
    @sagarbhong-f5q 1 year ago

    Thank you for sharing @Daniel

  • @javisartdesign
    @javisartdesign 1 year ago

    Really good explanation. 12 factor apps are a really good starting point to follow to ensure distributed, scalable and cloud-native applications

    • @Danieltammadge
      @Danieltammadge 1 year ago

      Thank you for watching and leaving your comment

  • @himanshutomar3512
    @himanshutomar3512 1 year ago

    Good explanation. I have one question: if a broken message never gets published to Kafka due to schema validation on the producer side, then what is the need for having schema registry configuration on the consumer side? Thanks in advance :)

    • @Danieltammadge
      @Danieltammadge 1 year ago

      Good question! The consumer should ensure that the data being consumed is consistent with the schema. Zero trust: always put guards in place and never trust upstream services. How do you know if an upstream service validated the event in the first place?
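
      A minimal sketch of that consumer-side guard, assuming JSON events validated with the jsonschema library; in practice the schema would be fetched from the schema registry rather than hard-coded, and the event fields here are hypothetical:

      ```python
      import json

      from jsonschema import ValidationError, validate

      # Hypothetical schema; normally resolved from the schema registry by subject and version.
      ORDER_PLACED_SCHEMA = {
          "type": "object",
          "required": ["orderId", "amount"],
          "properties": {
              "orderId": {"type": "string"},
              "amount": {"type": "number", "minimum": 0},
          },
      }

      def handle(event: dict) -> None:
          ...  # business logic

      def consume(raw_value: bytes) -> None:
          event = json.loads(raw_value)
          try:
              # Zero trust: validate again on the consumer side, even if the producer claims to.
              validate(instance=event, schema=ORDER_PLACED_SCHEMA)
          except ValidationError:
              return  # treat as a poison event: park it (dead-letter queue) instead of processing
          handle(event)
      ```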

  • @rajeshjaveri71
    @rajeshjaveri71 1 year ago

    Nice!!

  • @sergey-kotov-uk
    @sergey-kotov-uk 1 year ago

    Unfortunately, I could not find a clear, detailed explanation of Principle 8, Concurrency, anywhere. As I understand it, one of the consequences is a suggestion to avoid multithreaded code, which introduces complexity and has a hard upper limit in terms of scalability; it is easier to scale by increasing the number of processes/service instances, at the system level, rather than increasing threads, at the service (deployable unit) level. Am I right?

    • @Danieltammadge
      @Danieltammadge 1 year ago

      Thanks for asking your question. I approach principle 8 as if it is describing breaking systems down into microservices or cells which can be scaled independently using load balancers or, even better in my view, loosely coupled communication like message queues or event streams. And how do you enable services or functions to scale horizontally? Well, you ensure they are stateless, in the sense that the input contains what a process requires, or the process can use a networked service to fetch/look up what it requires.

  • @skblabla
    @skblabla 1 year ago

    Regarding rollback of work done by A, B, C... what if the rollback passes for A and B but fails for C? What will the orchestrator do, and what will be the final outcome?

  • @skblabla
    @skblabla 1 year ago

    Hi Daniel, what is the use of the partition and offset stored in the holding table?

    • @Danieltammadge
      @Danieltammadge 1 year ago

      Using this definition of a partition: in Apache Kafka, a partition is a unit of parallelism for managing and storing data. Each topic in Kafka is split into one or more partitions, which are essentially ordered, immutable sequences of records. Each partition can be thought of as an independent "mini" Kafka topic, with its own set of messages, offsets, and metadata. Partitions enable Kafka to provide high throughput and scalability, as they allow multiple consumers to read from a topic in parallel. And the offset is the position of an event/message in a partition. By storing the two, you know which partition the event came from and its position; therefore, as you consume later events, you are able to validate which events must be processed after the current failed event is processed.

  • @caitlynlawrence6423
    @caitlynlawrence6423 1 year ago

    Promo*SM 🙋

  • @StephenTD
    @StephenTD 1 year ago

    Awesome video, very interesting especially the bit about combining principles of 12 factor apps when developing microservices

    • @Danieltammadge
      @Danieltammadge 1 year ago

      Thanks for taking the time to comment. Glad you liked it

  • @hernanisilang1243
    @hernanisilang1243 1 year ago

    Have you ever implemented a circuit breaker in Kafka consumer?

    • @Danieltammadge
      @Danieltammadge 1 year ago

      Yes. It follows the same technique as retry topic consumers, i.e. if the circuit is open then wait a predefined time before trying again.
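
      A minimal sketch of such a circuit breaker as a plain class the consumer's poll loop can consult before calling the downstream dependency; the thresholds are hypothetical defaults:

      ```python
      import time

      class CircuitBreaker:
          """Opens after N consecutive failures; while open, the consumer backs off instead of calling the dependency."""

          def __init__(self, failure_threshold: int = 5, reset_after_seconds: float = 30.0):
              self.failure_threshold = failure_threshold
              self.reset_after_seconds = reset_after_seconds
              self.failures = 0
              self.opened_at = None

          def allow(self) -> bool:
              if self.opened_at is None:
                  return True
              if time.time() - self.opened_at >= self.reset_after_seconds:
                  self.opened_at = None  # half-open: let one attempt through
                  self.failures = 0
                  return True
              return False               # still open: caller should wait, not process

          def record_success(self) -> None:
              self.failures = 0
              self.opened_at = None

          def record_failure(self) -> None:
              self.failures += 1
              if self.failures >= self.failure_threshold:
                  self.opened_at = time.time()

      # In the poll loop: if not breaker.allow(), sleep (or pause the consumer) and do not
      # commit the offset, so the event is retried once the downstream service recovers.
      ```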

  • @kitkarson4226
    @kitkarson4226 1 year ago

    I am learning a lot from you. Thank you! I have a question. Event-carried state transfer allows other services to keep a local copy of the published events. Then what is CQRS? Both sound the same to me.

    • @Danieltammadge
      @Danieltammadge 1 year ago

      Thank you. Event-carried state transfer is a pattern that uses events to transfer the state of an object from one service to another. Whenever an event occurs in one service, a message is sent to all other services that need to know about the event. This message contains the state of the object, which allows other services to update their own state accordingly, or to process the event without fetching the state from an external source, often the publishing system.
      CQRS, by comparison, is a pattern that separates the read and write operations of a system into two different services: the write service is responsible for handling commands and updating the state of the system, while the read service is responsible for handling queries and retrieving data from the system. This segregation allows for better scalability and performance optimization. For example, you may be writing to a relational database where data is normalized. However, performing complex queries on relational databases can be slow and may not fit your requirements, whereas using an OpenSearch database might be better for querying and fetching data. So you implement CQRS and stream the changes, the change events, from the relational database to OpenSearch, denormalising and tuning the data for fetching.

    • @kitkarson4226
      @kitkarson4226 1 year ago

      @@Danieltammadge Wow.. sir.. thanks a lot for taking the time to respond in detail. One follow-up question if you do not mind. If I have a restaurant-service, payment-service, and delivery-service, can I have them as restaurant-service (command), payment-service (command), delivery-service (command), i.e. 3 command-handling services, and 1 query-service (which collects data from all other services to build a materialized view to query)?

    • @Danieltammadge
      @Danieltammadge 1 year ago

      ### Overview
      In terms of performance, it's possible to have three command services and one query service. However, it's important to consider the tradeoffs and whether this is the most efficient solution.
      ### Network Operations
      If the client queries three different endpoints, there will be multiple network operations. This can result in longer latency and more compute power needed. Additionally, if one of the endpoints fails, the client may not be able to receive all the data.
      ### Materialized Views
      An alternative to querying multiple endpoints is using a materialized view. This denormalizes the data and stores it in a way that allows for quicker queries with less compute power required. However, materialized views may not always be consistent and may have lag in distributed services. It's important to consider whether this is acceptable for the use case.
      ### Considerations
      When deciding whether to use multiple command services and one query service or a materialized view, consider the ordering of events, the need for consistency, and the potential tradeoffs between performance and efficiency. Overall, while it's possible to have three command services and one query service, it's important to carefully consider the best solution for the specific use case. However, if you are asking whether this is an acceptable approach for certain use cases: yes, yes it is.

    • @kitkarson4226
      @kitkarson4226 1 year ago

      @@Danieltammadge I have never seen anyone like you who answers questions so seriously and in such detail. I really, really appreciate it. God bless you. Please keep releasing new videos. 🙏

  • @mattyzacharia
    @mattyzacharia 1 year ago

    Well explained

    • @Danieltammadge
      @Danieltammadge 1 year ago

      I am glad you found it helpful and that you took the time to comment

  • @gregorycook5305
    @gregorycook5305 1 year ago

    Thanks for the video sir. Great job.

  • @skblabla
    @skblabla 1 year ago

    Should this be done in cases where 'create customer' failed due to third-party service downtime? The number of events failing in such a case will be huge.. what is the recommendation for such a scenario? I need to read more on error handling I guess.. but I want to know what kinds of exceptions can be handled this way.. will the approach change if we don't know the volume of exceptions? As we won't know how long the third-party service downtime will last for?

    • @Danieltammadge
      @Danieltammadge 1 year ago

      In event-driven systems, you encounter two types of issues: transient faults and errors due to poison events. A transient fault occurs when your system is attempting to call an external component, external service, or third-party service. In most cases, these faults are self-recovering and intermittent, and retrying the call to the external service is usually sufficient. However, you should design for long-term outages and implement the circuit breaker pattern to ensure that your processors wait for the downstream service to recover before resuming processing if a transient issue arises. On the other hand, poison events are a type of error that can be caused by the publisher or the processor/consumer. For errors caused by the event itself, you should publish a retry event to a retry stream or to a dead letter queue to be picked up by another process. This approach ensures that processing continues for other events and is not halted by a single poison event.
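
      A minimal sketch of that split between transient faults and poison events inside a consumer's error handling; call_downstream() and publish() are hypothetical helpers, and the topic names are placeholders:

      ```python
      TRANSIENT_ERRORS = (ConnectionError, TimeoutError)  # the dependency will likely recover

      def call_downstream(event: dict) -> None:
          ...  # e.g. an HTTP call to the third-party service

      def publish(topic: str, event: dict) -> None:
          ...  # e.g. produce to a Kafka topic or send to an SQS queue

      def handle(event: dict) -> None:
          try:
              call_downstream(event)
          except TRANSIENT_ERRORS:
              # Transient fault: retry later (combined with a circuit breaker / backoff),
              # so a temporary outage does not halt the whole stream.
              publish("orders.retry", event)
          except (KeyError, ValueError):
              # Poison event: the payload itself is bad, so retrying will never succeed;
              # park it on a dead-letter queue for inspection instead of blocking other events.
              publish("orders.dlq", event)
      ```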

    • @skblabla
      @skblabla 1 year ago

      Thanks Daniel.. this makes complete sense.. I started with EDA systems with the wrong understanding that it's simple, but the more I read the more I understand it's a wider topic.. thanks for sharing your knowledge on this.

  • @sergey-kotov-uk
    @sergey-kotov-uk 1 year ago

    Thanks for this and other brilliant videos. Would you be so kind as to share what tools you use to draw diagrams for your videos? And what tools would you recommend for an architect role?

    • @Danieltammadge
      @Danieltammadge 1 year ago

      I use Lucidcharts (lucid.app/) for my diagrams, using actions to hide and show layers. By using layers and actions together, I record my iPad screen and then scale the screen recording into the videos.
      I recommend the following tools for architects:
      - Lucidcharts (lucid.app/): A good diagramming tool for creating diagrams for documents and presentations. The free draw.io is also good, especially when working with teams.
      - Archi (www.archimatetool.com/): An open source modelling toolkit for creating ArchiMate models and sketches. It is good for maintaining the current and future architecture state for both technology and the business.
      - Notion (notion.io/): Better than Excel and really good for maintaining notes, dependencies (as well as component/interface lists), and relationships between notes, requirements, and other artifacts.
      - Stoplight (stoplight.io/): I have not found a better API design and documentation tool for HTTP/REST APIs than Stoplight.
      - Jira, Confluence, and Bitbucket.
      - sequencediagram.org/: An amazing tool for creating sequence diagrams.
      - jsoneditoronline.org/: An awesome tool for designing JSON objects.

    • @Danieltammadge
      @Danieltammadge 1 year ago

      Really appreciate that you took the time to comment and ask your question. Thank you Sergey

  • @cheequsharma7391
    @cheequsharma7391 1 year ago

    Great video. I faced this issue, and such issues are tough to catch when you have no clue about them.

  • @timmkrause6684
    @timmkrause6684 1 year ago

    With Azure Schema Registry, do you mean the Schema Registry feature within Event Hub Namespaces?

    • @timmkrause6684
      @timmkrause6684 1 year ago

      As we want to use Azure Service Bus in our case, which does not support schema registries, do I need to create an Event Hub Namespace while only using the schema registry feature?

    • @Danieltammadge
      @Danieltammadge 1 year ago

      Yes, in your case it looks like this is the way forward for you: learn.microsoft.com/en-us/azure/event-hubs/create-schema-registry?source=recommendations I am not too familiar with Azure, as I spend the majority of my time with AWS

    • @Danieltammadge
      @Danieltammadge 1 year ago

      Thank you for taking the time to watch and share your experience.

    • @timmkrause6684
      @timmkrause6684 1 year ago

      @@Danieltammadge Thank you for the confirmation. 👍🏼

  • @manideepkumar959
    @manideepkumar959 1 year ago

    If a hands-on example were also included it would have been better; I can't get the most out of it otherwise

    • @Danieltammadge
      @Danieltammadge 1 year ago

      Hopefully the following will help: danieltammadge.com/2023/02/delaying-apache-kafka-retry-consuming/ Thanks for watching and taking the time to comment

  • @StephenTD
    @StephenTD 1 year ago

    I see that you reuploaded with captions 😂. Keep it going

  • @LucasPersson-yb5sm
    @LucasPersson-yb5sm 1 year ago

    Debezium is a nice OSS tool for reading the transaction log of MySQL and a bunch of other DBs. It is built for Kafka, but its engine is open so you can connect to AWS Kinesis too, and then I assume other systems.

    • @Danieltammadge
      @Danieltammadge 1 year ago

      I have not used Debezium myself, as I run my systems in AWS for now and I prefer managed, or as-a-service, over self-run. So my systems use AWS Data Migration Service to stream to AWS Managed Kafka Service topics. Thank you for taking the time to comment and share your experience.

    • @LucasPersson-yb5sm
      @LucasPersson-yb5sm 1 year ago

      @@Danieltammadge I also run in AWS, using AWS Aurora (MySQL flavour) and MSK (Kafka), and Debezium is a Kafka Connect "app" which is also run as a managed service by AWS. The "only" trick I had to do was to implement a Kafka config plugin so that Debezium could pick up passwords for the DB from AWS Parameter Store. And you have to enable the binlog in Aurora, which normally you don't enable.

  • @dynojones
    @dynojones 1 year ago

    really helpful content

    • @Danieltammadge
      @Danieltammadge 1 year ago

      Glad it was helpful! And thank you for watching

  • @paraspaul4837
    @paraspaul4837 1 year ago

    Great video on this interesting use case. Could you explain more about your comment at 1:43 where you said "ability to have multiple consumers independently consuming the stream"? Do you suggest that other technologies like Kafka aren't capable of this?

    • @Danieltammadge
      @Danieltammadge 1 year ago

      No. I was referring to the fact that a simple queuing service like SQS, or a message broker like ActiveMQ or RabbitMQ (running under Amazon MQ, if on AWS), has the concept of a queue, which is designed to be consumed by one type of consumer. When multiple consumers are subscribing or pulling from the same queue, it acts as a load balancer and distributes messages between them. Amazon Kinesis and Apache Kafka fall under streaming or logs. A stream or a log maintains an immutable sequence of events for a retention period, which allows consumers to consume the stream or log independently from each other.
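
      A minimal sketch contrasting the two models, assuming boto3 for the SQS side; the queue URL, account ID, and handle() helper are hypothetical placeholders:

      ```python
      import boto3

      sqs = boto3.client("sqs", region_name="eu-west-1")
      QUEUE_URL = "https://sqs.eu-west-1.amazonaws.com/123456789012/orders"  # hypothetical queue

      def handle(body: str) -> None:
          ...  # business logic

      # Queue semantics (SQS, RabbitMQ, ActiveMQ): competing consumers. Whichever worker
      # receives and then deletes a message is the only one that ever processes it.
      resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=10)
      for message in resp.get("Messages", []):
          handle(message["Body"])
          sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])

      # Log semantics (Kafka, Kinesis): the record stays in the partition for the retention
      # period and each consumer group tracks its own offset, so a billing group and an
      # analytics group can both read the same events independently.
      ```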

    • @paraspaul4837
      @paraspaul4837 1 year ago

      @@Danieltammadge Thanks, this does clear up my confusion.

    • @Danieltammadge
      @Danieltammadge 1 year ago

      Perfect. Thanks for watching Paras

  • @chessmaster856
    @chessmaster856 1 year ago

    Any code or only this. Anybody can write code but only some can talk

    • @Danieltammadge
      @Danieltammadge 1 year ago

      ChessMaster, thank you for taking the time to comment. Quick question: is your comment a question?

    • @chessmaster856
      @chessmaster856 1 year ago

      @@Danieltammadge Yes. Can you provide some code/configuration examples about how many error scenarios need to be handled in a message queue?

  • @StephenTD
    @StephenTD 1 year ago

    I like how you can proxy to both services from the AWS API Gateway. And I agree, I think the first pattern should be referred to as storage-first instead of the second one.

    • @Danieltammadge
      @Danieltammadge 1 year ago

      Thanks for taking the time to comment. Glad you enjoyed it.