Microservice Pitfalls: Solving the Dual-Write Problem | Designing Event-Driven Microservices

  • Published: 29 Sep 2024

Comments • 15

  • @ghoshsuman9495 • 2 months ago

    What happens if the Change Data Capture (CDC) process attempts to write to Kafka, but Kafka is unavailable? How should the retry logic be written?
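A minimal sketch of one option, assuming the CDC process uses the Java producer client directly (broker address, topic name, and payload below are placeholders; Connect-based CDC tools such as Debezium have their own retry settings instead): the client's built-in retries handle transient outages, and the send callback reports when they are exhausted.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ResilientCdcPublisher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        // Let the client retry transient broker outages instead of hand-rolling a loop.
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);
        props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, 120000); // keep retrying for up to 2 minutes
        props.put(ProducerConfig.RETRY_BACKOFF_MS_CONFIG, 500);

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("orders", "order-1", "{\"status\":\"CREATED\"}"),
                    (metadata, exception) -> {
                        if (exception != null) {
                            // Retries exhausted: log it so the change can be re-sent later
                            // rather than silently lost.
                            System.err.println("Publish failed: " + exception.getMessage());
                        }
                    });
        }
    }
}
```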

  • @soloboy118 • 2 months ago

    Hello. I'm working as a Kafka admin. I recently faced an issue where, for a few topics (each topic has 6 partitions), one partition's end offset became zero. How do I resolve this? It's in production, please help.

  • @AxrelOn • 2 months ago

    Great video, thanks! When using the outbox pattern, the outbox records should be removed once the Kafka producer receives an ack, but could batching prevent that? Should batching be disabled in this case?
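For what it's worth, batching should not need to be disabled: the producer invokes the send callback per record once its batch is acknowledged, so each outbox row can be deleted in that callback. A rough sketch, where OutboxRow and OutboxDao are invented names for illustration:

```java
import java.util.List;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class OutboxRelay {
    // Invented shapes for the outbox row and its data access object.
    record OutboxRow(long id, String key, String payload) {}

    interface OutboxDao {
        List<OutboxRow> fetchUnpublished(int limit);
        void delete(long id);
    }

    private final KafkaProducer<String, String> producer;
    private final OutboxDao dao;

    OutboxRelay(KafkaProducer<String, String> producer, OutboxDao dao) {
        this.producer = producer;
        this.dao = dao;
    }

    void publishPending() {
        for (OutboxRow row : dao.fetchUnpublished(100)) {
            producer.send(new ProducerRecord<>("orders", row.key(), row.payload()),
                    (metadata, exception) -> {
                        if (exception == null) {
                            // The callback fires per record once its batch is acknowledged,
                            // so the row is only removed after the broker has the event.
                            dao.delete(row.id());
                        }
                        // On failure the row stays in the outbox and is retried on the next pass.
                    });
        }
    }
}
```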

  • @Fikusiklol • 2 months ago

    Hello, Wade!
    Wanted to ask about the CDC with ES option.
    From my perspective, publishing events (domain/internal) is wrong, as they might not contain enough data and consumers would be coupled to internal contracts (there are ways to avoid it, but just for simplicity's sake).
    Did you mean somehow reading the domain event, transforming it into an integration (ECST/fat) event, and then publishing it using CDC?
    Because I've been doing ES and still used an Outbox table to fill that integration event with metadata.

    • @sarwan.surchi • 2 months ago

      @Fikusiklol Why not store a mature domain event where a process could pick it up without an additional Outbox table?
      Although the Outbox is great for controlling what should be emitted, it just degrades write performance a little.

    • @Fikusiklol • 2 months ago

      @@sarwan.surchi Because a mature domain event doesn't care about metadata for integration, which can only be captured during the request, like tracing, user ID, timespan, etc.
      Also, I'm not a fan of slim/notification events for integration, as they are not sufficient.

  • @petermoskovits8470 • 3 months ago

    At 1:25 you cover in great detail how to address the problem when the Kafka write fails and the DB write succeeds. How about the other way around? What if the Kafka write succeeds, and the DB write fails?

  • @MrGoGetItt • 3 months ago

    Exceptional content delivery! Not only were you articulate, but the visuals were an excellent aid. Great work.

  • @darwinmanalo5436 • 3 months ago

    So instead of manually sending events to Kafka, we save the events to the database first. Then, there is a CDC tool that detects updates and automatically sends them to Kafka?
    Another tool adds another layer of complexity. Event Sourcing is quite complex, so people should carefully consider if it's the right tool for the project before implementing it. I wish these inconsistencies/issues were already solved in Kafka itself, not by us.
    P.S. The presentation is well-explained though. Wade is a good teacher.

    • @darwinmanalo5436 • 3 months ago

      @@ConfluentDevXTeam I got your point. Thanks for the reply, Wade.
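As a concrete illustration of the "save the events to the database first" step @darwinmanalo5436 describes above, here is a minimal sketch (table names, columns, and connection details are invented): the business row and the event row commit in the same transaction, and a CDC tool such as Debezium then streams the outbox rows to Kafka.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class SaveEventWithBusinessWrite {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; any relational DB with CDC support works similarly.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/appdb", "app", "secret")) {
            conn.setAutoCommit(false);
            try (PreparedStatement order = conn.prepareStatement(
                         "INSERT INTO orders (id, status) VALUES (?, ?)");
                 PreparedStatement outbox = conn.prepareStatement(
                         "INSERT INTO outbox (aggregate_id, event_type, payload) VALUES (?, ?, ?)")) {
                order.setString(1, "order-1");
                order.setString(2, "CREATED");
                order.executeUpdate();

                outbox.setString(1, "order-1");
                outbox.setString(2, "OrderCreated");
                outbox.setString(3, "{\"orderId\":\"order-1\",\"status\":\"CREATED\"}");
                outbox.executeUpdate();

                conn.commit(); // both rows commit or neither does; no direct write to Kafka here
            } catch (Exception e) {
                conn.rollback();
                throw e;
            }
        }
    }
}
```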

  • @BlindVirtuoso • 3 months ago

    Nice one. Thanks Wade.

  • @ilijanl • 3 months ago +1

    You can actually leverage the legacy DB transaction to publish to Kafka, with some tradeoffs. The flow can be the following (see the sketch after this comment):
    1. Start a transaction
    2. Insert into the legacy DB
    3. Publish to Kafka
    4. Commit
    If step 2 or 3 throws, nothing will be committed and the whole handler fails, which can be retried later. If for some reason 2 and 3 succeed and 4 fails, you have published the event to Kafka without storing it in the DB; however, you now have at-least-once publishing.
    The tradeoff is of course that your runtime has a dependency on Kafka, and if Kafka is down, you can never complete the transaction. However, Kafka is said to be highly available and performant, so the problem might be smaller than it seems.
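A minimal sketch of the flow @ilijanl describes, assuming a JDBC connection to the legacy database and the Java producer client (connection string, table, and topic are placeholders); the blocking get() makes step 3 fail fast if Kafka is unreachable, so the surrounding transaction rolls back.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class PublishInsideDbTransaction {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props);
             Connection conn = DriverManager.getConnection(
                     "jdbc:postgresql://localhost/legacy", "app", "secret")) {
            conn.setAutoCommit(false); // 1. start transaction
            try (PreparedStatement stmt = conn.prepareStatement(
                    "INSERT INTO orders (id, status) VALUES (?, ?)")) {
                stmt.setString(1, "order-1");
                stmt.setString(2, "CREATED");
                stmt.executeUpdate(); // 2. insert into the legacy DB

                producer.send(new ProducerRecord<>("orders", "order-1", "{\"status\":\"CREATED\"}"))
                        .get(); // 3. publish to Kafka and block for the ack

                conn.commit(); // 4. commit
                // If step 2 or 3 throws, the catch below rolls back and nothing is committed.
                // If the commit itself fails, the event is already in Kafka: at-least-once publishing.
            } catch (Exception e) {
                conn.rollback();
                throw e;
            }
        }
    }
}
```

As the replies below note, this couples the write path to Kafka availability, which is one reason the outbox pattern is often preferred instead.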

    • @Fikusiklol • 2 months ago +1

      That is not true: "now you have at-least-once for publishing."
      It depends on:
      1. whether you have transient retries or not, and those might not work either
      2. whether the user wants to retry or not
      I'd say that it is fundamentally wrong and should be avoided for transactions that are important.
      For generic concerns like emails, whatever.
      All in all, just ask your business people about it.

    • @ilijanl • 2 months ago

      @@ConfluentDevXTeam You are correct if your handler doesn't have any retry logic; if it does, then the commit (step 4) will eventually succeed if you set it up correctly (handling idempotency, etc.). However, like you mentioned in the video, I also prefer the outbox pattern.