
Lesson 1 - Event-Driven Architecture: Request/Reply Pattern

  • Published: 14 Aug 2024
  • In this first lesson Mark talks about how to do request/reply processing within an event-driven architecture. Even though messaging is an asynchronous protocol, there are times when we need to wait for a response when using messaging and queues. This short architecture lesson will show you how to do this. Stay tuned each Monday for more lessons in Software Architecture at www.developert....
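The request/reply mechanics the lesson describes can be sketched in plain Java, using in-memory queues as a stand-in for a real broker (JMS/AMQP). This is a minimal illustration, not the lesson's actual sample code; all class and variable names are made up:

```java
import java.util.UUID;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class RequestReplyDemo {
    // A message carries its own unique ID plus an optional correlation ID.
    record Message(String messageId, String correlationId, String body) {}

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Message> requestQueue = new ArrayBlockingQueue<>(10);
        BlockingQueue<Message> replyQueue = new ArrayBlockingQueue<>(10);

        // Consumer: reads the request and replies with a NEW message ID,
        // copying the request's message ID into the reply's correlation ID.
        Thread consumer = new Thread(() -> {
            try {
                Message request = requestQueue.take();
                replyQueue.put(new Message(UUID.randomUUID().toString(),
                        request.messageId(), "reply to " + request.body()));
            } catch (InterruptedException ignored) { }
        });
        consumer.start();

        // Producer: sends the request, then does a blocking wait for the reply.
        Message request = new Message(UUID.randomUUID().toString(), null, "get quote");
        requestQueue.put(request);
        Message reply = replyQueue.take(); // the blocking wait from the lesson

        // The correlation ID ties the reply back to the original request.
        if (request.messageId().equals(reply.correlationId())) {
            System.out.println("matched: " + reply.body());
        }
        consumer.join();
    }
}
```

With a real broker, the producer would set the reply's destination via a reply-to header and the consumer would copy the message ID into the correlation ID in the same way.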

Comments • 71

  • @rrckguy
    @rrckguy 4 years ago +2

    The first time I subscribed to a channel just looking at the profile. Thank you very much for your time.

  • @pankajsinghv
    @pankajsinghv 4 years ago +1

    Awesome articulation of request/reply in event-driven architecture

  • @michaelsalazar8587
    @michaelsalazar8587 2 months ago

    Thanks Mark Richards, new follower here.

  • @shery605
    @shery605 1 year ago

    I am glad that I found your channel ❤

  • @DanielGomez-gv5xc
    @DanielGomez-gv5xc 6 years ago +4

    This is an awesome idea! I'm looking forward to being able to do this transition myself. Thanks for sharing your knowledge.

  • @sounderarajan10
    @sounderarajan10 3 years ago

    Thanks for your remarkable effort to help other people evolve. Hats off...

  • @alexsharma
    @alexsharma 4 years ago

    Windows Forms applications are based on event-driven architecture. Generally Windows Forms are synchronous, but to implement async operations we used threads and message queues. All web applications are based on the request/reply messaging approach. To bring in async operations, we used Ajax calls.

  • @Raptor-jv7fi
    @Raptor-jv7fi 1 year ago

    Absolutely amazing explanation

  • @BharCode09
    @BharCode09 4 years ago +4

    Awesome video! I just found a treasure woohoo!

  • @LimitedWard
    @LimitedWard 2 years ago +1

    Rather than set a blocking call to wait for the response, why not use an in-memory cache to store the state of each request? Then whenever a response comes in, pull the top one off the queue and look up the cached request using the correlation ID to continue processing it.
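The commenter's suggestion can be sketched roughly like this: park each outstanding request in an in-memory map keyed by correlation ID, and have the reply-queue listener complete it when the matching reply arrives. This is a sketch under the commenter's assumptions, not code from the lesson; all names are illustrative:

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

public class PendingRequests {
    // Outstanding requests keyed by correlation ID; completed as replies arrive.
    private final Map<String, CompletableFuture<String>> pending = new ConcurrentHashMap<>();

    // Called when a request is sent: register a future under its correlation ID.
    CompletableFuture<String> register(String correlationId) {
        CompletableFuture<String> future = new CompletableFuture<>();
        pending.put(correlationId, future);
        return future;
    }

    // Called by the reply-queue listener: look up the original request by
    // correlation ID and complete it - no per-request blocking receive.
    void onReply(String correlationId, String body) {
        CompletableFuture<String> future = pending.remove(correlationId);
        if (future != null) future.complete(body);
    }

    public static void main(String[] args) throws Exception {
        PendingRequests requests = new PendingRequests();
        String cid = UUID.randomUUID().toString();
        CompletableFuture<String> reply = requests.register(cid);
        requests.onReply(cid, "quote=42"); // simulates the listener thread
        System.out.println(reply.get());
    }
}
```

The trade-off versus the blocking receive is that this needs one dedicated listener on the reply queue plus eviction of entries whose replies never arrive.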

  • @saajann29
    @saajann29 5 years ago +3

    Thanks for the video, Mark.
    Why does the sender need to do a blocking wait in your example (at 2:30)? Doesn't this essentially make the request synchronous from the sender's point of view?

    • @markrichards5014
      @markrichards5014  5 years ago +4

      With request/reply messaging, when you send a request you necessarily need to wait for the response. To your point, you could certainly do some other processing between sending the message and receiving the reply. However, at some point you will still need to wait for the response, which is a blocking wait. If you do other processing before waiting, chances are the response will already be in the queue, minimizing that blocking wait.

  • @vickyvivek3286
    @vickyvivek3286 3 years ago +3

    How can the ID be uniquely generated at the client side in a distributed systems environment?
    Are different clients using different topics/queues here? If so, wouldn't that be costlier?
    Can we let the service generate the CID and return it in a 202 response so that the client can use it later to check the result?

    • @markrichards5014
      @markrichards5014  3 years ago +1

      Hi Vicky, typically the correlation ID comes from the message ID, which is generated using a UUID/GUID, so you are correct in saying that there is a possibility (albeit rare) of a collision of the correlation ID. Your technique is one I have used - use the message ID (or generate your own UUID) and append the machine name to the custom correlation ID, therefore better guaranteeing uniqueness.
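The technique described in the reply - append the machine name to a generated UUID to further reduce the (already tiny) chance of a cross-client collision - could look like this. A minimal sketch; the class name is made up and "machine-a" stands in for the real host name (e.g. obtained via InetAddress):

```java
import java.util.UUID;

public class CorrelationIds {
    // Append the machine name to a generated UUID so that two clients that
    // somehow produce the same UUID still yield distinct correlation IDs.
    static String newCorrelationId(String machineName) {
        return UUID.randomUUID() + "-" + machineName;
    }

    public static void main(String[] args) {
        String cid = newCorrelationId("machine-a");
        System.out.println(cid.endsWith("-machine-a"));
    }
}
```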

  • @cseshivaprasad1985
    @cseshivaprasad1985 5 years ago

    Hi Mark, great video. I gave some thought to this pattern and its practical adoptability in applications. Here are my questions.
    1. As we are relying on standard MQs, which are standalone software, how do we go about scaling this model?
    2. For every request/response we are creating a connection to the MQ; how does this model work for highly concurrent applications?
    3. What are the practical use cases where this model is a great fit?

    • @markrichards5014
      @markrichards5014  5 years ago +3

      Hi Shivaprasad, scaling and fault tolerance are usually handled through clustering broker instances or using a multi-queue pattern where the producers send messages to different brokers in a round-robin fashion (I'll be doing a lesson on this sometime in September). Regarding request/reply, you only have a single persisted connection to the broker - the session (JMS) or channel (AMQP) is a multiplexed part of the connection, so you don't continually reconnect to the broker each time. Practical use cases are anytime you want to use messaging rather than REST - in some cases you can get better responsiveness because control comes back to the producer after the send, and before the blocking wait, allowing you to do some work without having to waste time waiting for the reply (a lesson that is also forthcoming).

  • @umamaheshsukamanchi
    @umamaheshsukamanchi 5 years ago +1

    Thanks for extraordinary content and very clear explanation...

  • @sant4398
    @sant4398 1 year ago

    Great video! Thank you! Can I ask you a question - why, from your POV, do most of the architecture trainings I've come across come from engineers with a Java background, while .NET seems left to the side? I'm from the .NET world and it bothers me.

  • @mahdi5796
    @mahdi5796 10 months ago

    Thank you. But can you please explain why each message has both a CID and an ID, and why you swap them? What's wrong with having just one ID?

    • @markrichards5014
      @markrichards5014  10 months ago +2

      The return message ALSO has its own unique message ID, which is why the correlation ID is used to match up these messages and pair them together. Every message that gets sent always has its own unique message ID.

  • @arpit17792
    @arpit17792 3 years ago +1

    Good explanation. But what exactly are we gaining here compared to the HTTP request/response protocol? Thanks in advance.

    • @markrichards5014
      @markrichards5014  3 years ago +3

      First, depending on your RESTful topology, we are avoiding the gateways, web servers, load balancers, discovery servers, and everything else that gets in the way of actually making a "simple" RESTful call to a service. Second, because it is async, while the other service is getting data for me, I can do other things - similar to future or promise processing. I should do a video on that!

  • @mahmoudebada4025
    @mahmoudebada4025 2 years ago

    Awesome, thanks a lot for the concise, clear, and simple content.

  • @santuNLD
    @santuNLD 6 years ago +3

    Thank you for this first lesson. Would it be possible to have links to exercises between lessons to go a bit deeper into the subject?

    • @markrichards5014
      @markrichards5014  6 years ago +3

      What a great idea! I will add that to the ending slide starting with lesson 3.

  • @aparfeno
    @aparfeno 1 year ago

    Mark, these are great videos. Could you reorder the playlist in chronological order so that one can listen to all your lessons in order?

    • @markrichards5014
      @markrichards5014  1 year ago

      Hi Alex, you can see the order here: www.developertoarchitect.com/lessons/

  • @glowiever
    @glowiever 3 years ago

    How expensive is it to create a temporary queue in something like Apache Kafka or RabbitMQ?

  • @ahmedzaki4006
    @ahmedzaki4006 3 years ago

    Thank you for the explanation.

  • @miguelugalde2094
    @miguelugalde2094 3 years ago

    Silly question: Why do we use a queue if we end up filtering on that queue anyway? Why do we care to maintain order among requests? Why not use a hash and look up by correlation ID (I am assuming correlation IDs are unique)? Would love to know what the FIFO is doing for us 😁 love the videos!

    • @markrichards5014
      @markrichards5014  3 years ago

      Correlation IDs are unique, and usually set to the message ID so that you know the response you are getting back is tied to the message you sent, particularly if you are sending multiple request/reply messages at the same time. Request/reply can sometimes improve response time because you can do other things in the service between sending the message and waiting for the reply - the power of async...

    • @LimitedWard
      @LimitedWard 2 years ago

      @@markrichards5014 this doesn't really answer the original question. Your response justifies the use of the correlation ID, but what they are asking about is why use a queue at all if you're just gonna pull items out of the queue out of order? It seems like the wrong data structure to use at that point.

  • @KresnaPermana
    @KresnaPermana 2 years ago

    I was thinking about this but didn't know it was the request/reply pattern, thanks!!

  • @prashantjha439
    @prashantjha439 3 years ago

    So will the sender keep the client connection open until the asynchronous event communication happens?
    Does it also need to save the initiator thread state before starting this async communication?
    When it gets a response back, how does it know which client connection the response needs to be sent to?

    • @markrichards5014
      @markrichards5014  3 years ago

      Generally a producer will keep a consistent connection to a message broker and the queue. All of the detailed threading and connection logic is handled by the underlying API and message broker. When using request/reply, the producer looks for its response through a correlation ID, which is usually set to the original message ID (temporary queues can be used as another alternative, although I much prefer the correlation ID technique).

  • @HugoAndresBuitrago
    @HugoAndresBuitrago 6 years ago +1

    Very useful Mark. Thanks. Sometimes there is confusion between the usage of MessageId and CorrelationId. Is there any value in using one or the other?

    • @markrichards5014
      @markrichards5014  6 years ago +2

      Hi Hugo, normally the correlation ID contains the original message ID since it is unique, but you could always create your own unique UUID and use that instead. I have seen cases where some folks use a custom dedicated message header variable for the message correlation, but that practice creates a tight coupling between senders and receivers. Use of a correlation ID is standard practice. For example, using your own variable in the header, say "reply_id", requires all parts of your system to know that "reply_id" is used to correlate messages. Not a good idea in my opinion.

    • @Vibhor8apex
      @Vibhor8apex 3 years ago

      @@markrichards5014 I agree with you, using standard practices eases the job of someone who is maintaining the applications.

  • @iorch82
    @iorch82 2 years ago

    Hey Mark, thanks for this great lesson. One question I have for you: do you see this as a suitable approach for consuming a backend event-driven architecture from an API gateway? This would be the flow: the UI makes an HTTP request for some action to the API gateway; the API gateway then publishes the request message and waits synchronously for the response using this pattern, eventually replying. I think it could work, but the approach seems a bit alien as per current industry practices.

    • @markrichards5014
      @markrichards5014  2 years ago +1

      Well, in the case of an API gateway, there might not be much of an advantage because the API gateway really doesn't do much additional work while waiting for the reply to come back. The use cases I would use this pattern for are when the API gateway is communicating with a service that only accepts messages, or to further decouple the API gateway from the services it is communicating with (IOW, the API gateway is sending to a queue rather than a service endpoint, so it has less knowledge about what service it is directing the call to).

  • @scorpioreloaded1734
    @scorpioreloaded1734 4 years ago

    Hi Mark, at time 3:37, when the message with CID:124 was read from the queue by the message selector, I didn't get how it manages to pull a later value from the queue, given that queues are actually FIFO?

    • @markrichards5014
      @markrichards5014  4 years ago +1

      Adding a message filter (also called a message selector) allows you to filter specific messages, but those filtered messages are still in FIFO order. Any message not selected by the message filter remains on the queue for another consumer to pick up. You can see an example by going to timjansen.github.io/jarfiller/guide/jms/selectors.xhtml.
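In JMS the selector is a string on the consumer, e.g. "JMSCorrelationID = '124'". The behavior described in the reply - matching messages are consumed in FIFO order while non-matching ones stay queued for other consumers - can be simulated in memory like this (a sketch with made-up names, not the JMS API itself):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Iterator;

public class SelectorDemo {
    record Msg(String correlationId, String body) {}

    // Consume the first message matching the correlation ID, in FIFO order;
    // non-matching messages stay on the queue for other consumers.
    static Msg receiveSelected(Deque<Msg> queue, String cid) {
        Iterator<Msg> it = queue.iterator();
        while (it.hasNext()) {
            Msg m = it.next();
            if (m.correlationId().equals(cid)) {
                it.remove();
                return m;
            }
        }
        return null; // nothing matched
    }

    public static void main(String[] args) {
        Deque<Msg> replyQueue = new ArrayDeque<>();
        replyQueue.add(new Msg("123", "reply A"));
        replyQueue.add(new Msg("124", "reply B"));

        Msg mine = receiveSelected(replyQueue, "124");
        System.out.println(mine.body());       // the selected reply
        System.out.println(replyQueue.size()); // "reply A" is still queued
    }
}
```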

  • @MrMikomi
    @MrMikomi 1 year ago

    I am confused. You talk about blocking on a response to a message sent to a QUEUE. The whole point of queues and topics (I thought) was to make a producer-consumer communication asynchronous and loosely coupled. I have never seen this blocking on a response from a queue. Is it just that it's very late, I'm very tired and my brain is kaput? My experience is sender sends to q, MDB listens to q, and that's it. No blocking. Thanks.

    • @VladPatras
      @VladPatras 1 year ago

      This pattern is a direct alternative to using a REST/HTTP call, which is synchronous. For example a portal backend needs to load data for a page by aggregating several services. The portal could do other things after requesting the user's first name, but it will eventually have to block in order to respond to the browser call.
      While not highly asynchronous the services are still loosely coupled in terms of knowledge. In a "simple" REST call you need to know the location of the other service (or use a service locator) and you have to worry about all the different ways the call can fail (no route, timeout, auth. fail etc.). With this approach you only have to connect to the broker (once for all services and calls) and set a timeout on the blocking call.

    • @markrichards5014
      @markrichards5014  1 year ago

      Check out Lesson 142 at www.developertoarchitect.com/lessons/lesson142.html - that might clear up your confusion.

  • @hardlyconfused3541
    @hardlyconfused3541 3 years ago

    Good presentation. Could you also explain when to use which method?

  • @lonez5228
    @lonez5228 4 years ago

    Very nice video! Thanks Mark.
    I have a question: isn't it too expensive to create and delete queues all the time in this temporary queue technique?

    • @markrichards5014
      @markrichards5014  4 years ago +2

      It is, which is why I prefer the correlation ID technique. It's not so bad for light loads and throughput, but as throughput increases, you might see a little performance hit with the temporary queue technique.

  • @chandrahasan9643
    @chandrahasan9643 6 years ago +1

    Thanks Mark for this short lecture. Is there any way I can effectively implement the request/reply pattern using Kafka? I need higher-throughput stream processing for my requirements, so I really can't use RabbitMQ, and I don't want to use RabbitMQ only for this.

    • @markrichards5014
      @markrichards5014  6 years ago +9

      Kafka is a publish-and-subscribe (pub/sub) broker, so implementing request/reply would be rather difficult in Kafka in that it is more of a broadcast model rather than a queuing model (point-to-point messaging). Request/reply processing is better suited for standard messaging (RabbitMQ, ActiveMQ, etc.), which leverages the point-to-point messaging model with queues, or exchanges as in RabbitMQ. Personally I've never thought about or tried to implement request/reply in Kafka, so I'm not even sure where I'd start! I'd suggest taking a look at the Streams API (the Core API wouldn't work with this) and looking at the MetaRecord to maybe pass correlation IDs around. It would be very tricky though, in that you aren't really sure who is subscribing to a topic and who is returning results.

  • @dmitryponyatov2158
    @dmitryponyatov2158 5 years ago

    How can I adapt event-driven architecture to an extra-small embedded device with, say, 20K(!) bytes of RAM in total, and mix it with tiny data-packet queueing? I mean IoT nodes and sensor swarms.

    • @markrichards5014
      @markrichards5014  5 years ago

      Hi Dmitry, I don't have expertise in IoT messaging communications, so alas, I am not sure of how to adapt messaging for the use case you described. If I run across anything I will be sure to add the info in this comment thread.

  • @alexsharma
    @alexsharma 4 years ago

    Great video Mark!!!!
    One quick question - the provided code is in Java, but I am working on the .NET side. Do you have, or do you know of, any .NET C# related code site for the same architecture?

    • @markrichards5014
      @markrichards5014  4 years ago +2

      Common brokers for .NET C# include MSMQ (of course) and RabbitMQ. You can check both of those sites for some examples...

  • @romanterendiy
    @romanterendiy 2 years ago

    good content, thank you

  • @prannoyroy5312
    @prannoyroy5312 2 years ago

    Fabulous!

  • @neerajmahajan1305
    @neerajmahajan1305 6 years ago

    Hi Mark
    What would happen in your example code (01_10_Request_Reply_Code) if the message requester doesn't receive a response from the consumer? Will it remain blocked, or will there be some timeout error after some time?

    • @markrichards5014
      @markrichards5014  6 years ago

      Great question! The receive() in JMS and nextDelivery() in AMQP (RabbitMQ) both take a timeout value. If none is specified (as in my code), then the requestor will in fact wait forever. A good practice is to specify a timeout value (in milliseconds) for each of those methods. For example, in JMS, sender.receive(3000) will wait 3 seconds. If a response has not been received in that time, the method will end and return NULL, which you can check to know something went wrong and maybe try the request again.
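The timeout practice described above can be sketched without a broker by letting BlockingQueue.poll(timeout) play the role of JMS receive(3000): both return null when no reply arrives in the window. A minimal illustration with made-up names:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class TimeoutDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> replyQueue = new ArrayBlockingQueue<>(1);

        // Nothing is ever put on the queue, so poll() times out and
        // returns null - the same signal receive(3000) gives in JMS.
        String reply = replyQueue.poll(100, TimeUnit.MILLISECONDS);
        if (reply == null) {
            System.out.println("timed out - retry or fail the request");
        }
    }
}
```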

    • @neerajmahajan1305
      @neerajmahajan1305 6 years ago

      Thanks for your quick response and detailed explanation. All of your video courses are excellent and I like when you share good/best practices along with the course material.

  • @sandeshjayaprakash8946
    @sandeshjayaprakash8946 5 years ago

    Mark, I like your videos.
    Why is this called event-driven? As we are waiting on a response message, it's more of a synchronous call. Please clarify.

    • @markrichards5014
      @markrichards5014  5 years ago +5

      You are right, this mimics a synchronous call. However, even with these "pseudo-synchronous" calls we still get several advantages over synchronous protocols (like RPC or REST). First, services or processors are better decoupled from one another, and I can do additional processing after I send a message before having to do the blocking wait for the reply. If you are using messaging for async without the need of a reply, with this pattern we can still leverage messaging for those calls where we are expecting a reply without having to turn to other synchronous protocols.

  • @abhisgup
    @abhisgup 3 years ago

    www.enterpriseintegrationpatterns.com/patterns/messaging/ReturnAddress.html and www.enterpriseintegrationpatterns.com/patterns/messaging/RequestReply.html describe the pattern. See aws.amazon.com/blogs/compute/simple-two-way-messaging-using-the-amazon-sqs-temporary-queue-client/ to see how to do this with SQS.

  • @rambo4014
    @rambo4014 2 years ago

    Isn't this similar to what async/await is for?

  • @TheGrammz
    @TheGrammz 3 years ago

    Hi Mark, I will need to disagree with you on the terms, because this pattern is not event-driven. Looking at the definition of events: an event is a signal published (with no recipient in mind) when a component reaches a certain state; events don't ask for things, they inform. This is message-driven but not event-driven.

    • @markrichards5014
      @markrichards5014  3 years ago +4

      Back about 3 years ago when I recorded that video I had just started the series, and was planning on categorizing all of the lessons (those related to event-driven architecture, enterprise architecture, integration architecture, and so on). I abandoned that idea after the first couple of lessons. I tend to avoid definitive definitions within this industry because everyone seems to have a different definition and we end up playing the semantics game (e.g., what is an event, what is service-oriented, what is enterprise architecture, etc.). Event-driven architecture encompasses many different things, including messaging as a protocol - and yes, there are times when you do in fact need a response from an event (what you are calling a command). There are several ways to do this - request/reply as I've shown here, or receive a response event on a different channel (usually reserved for longer-running events or ones that don't require an immediate response). This video was not meant to "define" event-driven architecture, but merely to show one of the processing mechanisms within this broad, ambiguous thing we call event-driven architecture. So, call this message-driven if you would like within an event-driven architecture.

  • @ruixue6955
    @ruixue6955 2 years ago

    1:12

  • @arthurlamy5535
    @arthurlamy5535 3 years ago

    I wouldn't call this "Event Oriented". It's plain old good "Service-Oriented Integration" - just an "async service with an answer". It is a similar misconception to that coined by the old dictum "SOA is not technology / ESB / whatever". Event orientation is a completely different mindset. In this case you produce a Command, not a Request, and receive an Event, not a Response. The difference is that an event is supposed to be read by everybody eligible, not just by the initiator... and lots more. In this simple example it might look similar from a programmer's point of view. However, the real difference is how designers conceptualize the problem at hand. An event-oriented implementation then comes naturally.

    • @markrichards5014
      @markrichards5014  3 years ago +6

      Back about 3 years ago when I recorded that video I had just started the series, and was planning on categorizing all of the lessons (those related to event-driven architecture, enterprise architecture, integration architecture, and so on). I abandoned that idea after the first couple of lessons. I tend to avoid definitive definitions within this industry because everyone seemed to have a different definition and we end up playing the semantics game (e.g., what is an event, what is service-oriented, what is enterprise architecture, etc.). Event-driven architecture encompases many different things, including messaging as a protocol - and yes, there are times when in fact you do need a response from an event (what you are calling a command). There are several ways to do this - request/reply as I've shown here, or receive a response event on a different channel (usually reserved for longer running events or ones that don't require an immediate response). This video was not meant to "define" event-driven architecture, but merely to show one of the processing mechanisms within this broad, ambiguities thing we call event-driven architecture.