Understanding Kafka with Legos

  • Published: 14 Jan 2025

Comments • 60

  • @jingluo8858
    @jingluo8858 7 years ago

    I think this was a good foundational way of understanding kafka with just the right level of depth to be informative but not confusing. Thank you very much for making this!

  • @mr.w7803
    @mr.w7803 7 years ago

    Thank you so much for this description!! I'm a front end developer working with a big data project. Generally I don't have a need to mess with Kafka, but our engineers are using it... so it's great to have a simple explanation so we can communicate a bit better.

  • @kengranger95
    @kengranger95 5 years ago +5

    Should I be worried that I don't see a green paper?

    • @jessetanderson
      @jessetanderson  5 years ago

      It is green, but the camera picked it up as yellow.

  • @AnandCRockzz
    @AnandCRockzz 7 years ago

    Partitions, key-based retention, batch consumption! I was looking for these, and your video was a treat. Thank you!

  • @anilboppuri2067
    @anilboppuri2067 6 years ago

    Appreciate your effort in explaining through Legos. Thank you Jesse

  • @bertjanbakker9497
    @bertjanbakker9497 7 years ago +43

    Although I like the idea of simplifying the topic with Lego, I find the Lego bricks don't support the explanation much compared to other means, e.g. diagrams. I also got the impression that you tried to explain many of the Kafka concepts at once, instead of piece by piece.

    • @jessetanderson
      @jessetanderson  7 years ago +10

      Tactile explanations like this aren't for everyone.

    • @cyberbobcat
      @cyberbobcat 7 years ago +12

      I have the same feeling about it. A few notes to give you more specific feedback:
      * You swap names and correct yourself a lot. It is confusing.
      * You play with the Legos a lot. We watch every move you make, so when you separate the bricks and put them together again, it is like an "unnecessary comment in code". You talk with the moves. Be brief.
      * Why do the producers send three bricks (use one and then take two more when data is replicated to consumers)? Why does Kafka break up the three? Why do some of them stay on Kafka when they have already been sent to consumers?
      Everything you do has a meaning and should really fit the explanation, which in this case I don't feel it does.
      I think there are a bunch of people who hear Kafka, see Legos, and blindly tell you you are awesome. Honestly, you are awesome. Thanks for your work! Here is the feedback.
      Cheers

    • @kennethcarvalho3684
      @kennethcarvalho3684 6 years ago

      @@jessetanderson I feel this is certainly a great way to teach Kafka. My suggestion would be to bring in more cards to show the broker-to-topic-to-partition-to-consumer flow, and to use more blocks to show the ordering of data coming out in a FIFO way. Thanks.

  • @Mr13forTaylor
    @Mr13forTaylor 8 years ago

    Good one Jesse. Hope you have some Docker and Git ones in the making...

  • @grooveshelter
    @grooveshelter 8 years ago

    would really love a clip like this that continues the talk but takes into account kafka node failover inside the cluster.

  • @itsmerajas
    @itsmerajas 7 years ago

    Excellent and simple presentation! The Legos had a great impact when it came to explaining consumer-side failures/restarts and the Kafka servers' data retention policies!
    Thanks +Jesse Anderson! :)

  • @MrYaxyzza
    @MrYaxyzza 6 years ago +1

    That's a marvelous explanation! So nice and simple ^_^ Thank you so much

  • @vlasov01
    @vlasov01 5 years ago

    Nice video. Could you please clarify the compaction approach? You said that the newest messages will be lost during compaction. I've read the opposite in other sources: "Log compaction retains at least the last known value for each record key for a single topic partition."

    • @jessetanderson
      @jessetanderson  5 years ago +1

      You misunderstood what I was saying. I was saying there is a newer message, so the older message(s) will be lost under a compaction strategy.

    • @vlasov01
      @vlasov01 5 years ago

      Hi Jesse, thank you for the clarification!
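The back-and-forth above can be made concrete with a small sketch. This is a deliberately simplified model of compaction (the function name and toy records are mine, not Kafka's API): for each key the newer message wins, and the older message(s) for that key are lost.

```python
# Toy model of the compaction strategy discussed above: for each key,
# only the newest record survives; older records for that key are
# removed. (Kafka's real log cleaner works on log segments and only
# guarantees that *at least* the last value per key is retained.)

def compact(log):
    """log is a list of (key, value) records in offset order."""
    latest = {}
    for offset, (key, _value) in enumerate(log):
        latest[key] = offset  # later offsets overwrite earlier ones
    # Keep a record only if it holds the newest offset for its key.
    return [record for offset, record in enumerate(log)
            if latest[record[0]] == offset]

log = [("user1", "v1"), ("user2", "v1"), ("user1", "v2")]
print(compact(log))  # [('user2', 'v1'), ('user1', 'v2')]
```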

  • @jamesren4949
    @jamesren4949 5 years ago

    Sorry to look over your shoulder, but why would the black Lego message (or topic?) be placed into Kafka and the consumer at the same time? 11:45. Shouldn't they be consumed and thus removed from the Kafka brokers?

    • @jessetanderson
      @jessetanderson  5 years ago

      That's one of the big differences with Kafka. All data is retained until it is deleted by the broker, usually days later. So no, data doesn't get removed after consumption.
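The retention behavior described in this reply is controlled by broker and topic settings, not by consumption. These are real Kafka broker setting names; the values shown are the shipped defaults, listed only for illustration:

```properties
# server.properties (broker defaults; values are Kafka's shipped defaults)
log.retention.hours=168     # delete log segments older than 7 days
log.retention.bytes=-1      # no size-based limit
log.cleanup.policy=delete   # set to "compact" for key-based compaction
```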

  • @josethomas9451
    @josethomas9451 6 years ago

    Great explanation. I was waiting for the explanation of the total ordering of messages. If I understood it correctly, a producer will always publish to the same partition, thus keeping the order of messages. If this is true, is a new partition created as the publisher count increases?

    • @jessetanderson
      @jessetanderson  6 years ago

      Jose Thomas, the partition a publisher will send to only changes if the number of partitions changes. There are some caveats, but that's the general rule.
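The rule in this reply follows from how keyed messages are assigned to partitions: hash the key, then take it modulo the partition count. A minimal sketch of the idea (Kafka's default partitioner actually uses murmur2 over the key bytes; crc32 below is just a deterministic stand-in):

```python
import zlib

def partition_for(key: bytes, num_partitions: int) -> int:
    """Map a key to a partition: hash(key) mod partition count.
    (Kafka's default partitioner uses murmur2; crc32 is a stand-in.)"""
    return zlib.crc32(key) % num_partitions

# The same key always lands on the same partition...
assert partition_for(b"user-42", 6) == partition_for(b"user-42", 6)

# ...but changing the number of partitions can remap keys, which is
# why adding partitions can break per-key ordering guarantees.
print(partition_for(b"user-42", 6), partition_for(b"user-42", 12))
```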

  • @Edu4Dev
    @Edu4Dev 4 years ago

    AMAZING POV !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

  • @krisatutube
    @krisatutube 7 years ago +4

    Thank you for the informal explanation, it really helps.

  • @TimothyPeters
    @TimothyPeters 5 years ago

    Big thanks to you.

  • @ashutoshtewari7035
    @ashutoshtewari7035 6 years ago

    Thanks for the great video. It really helps

  • @onurtosyaloglu2002
    @onurtosyaloglu2002 6 years ago

    very clear, helpful, thanks

  • @praveensrinivasan3387
    @praveensrinivasan3387 8 years ago

    Thank you sir, very good explanation!!!

  • @nocoty1316
    @nocoty1316 8 months ago

    Why do you keep the camera behind your back?

    • @jessetanderson
      @jessetanderson  8 months ago

      Because I wanted it to be as if you're looking over my shoulder while I teach you something.

    • @nocoty1316
      @nocoty1316 8 months ago

      @@jessetanderson Kinda cool, but it's the first time I've seen something like this.

  • @uidx-bob
    @uidx-bob 8 years ago

    Super job loved it!

  • @unnamed....8522
    @unnamed....8522 6 years ago

    Awesome one

  • @brunoju8894
    @brunoju8894 5 years ago

    great intro~

  • @robisonkarls
    @robisonkarls 6 years ago

    Wow, thank you, you clarified so many concerns that I had :D

  • @willwang7207
    @willwang7207 8 years ago +2

    If consumer 1 is down, when it comes back I understand it can get the correct order of data from Kafka 1 or Kafka 2, but what about the data combined from Kafka 1 & 2? Are they still in the same order as if the consumer never went down?

    • @jessetanderson
      @jessetanderson  7 years ago +1

      Ordering is guaranteed at a per-partition level. Whether the consumer is up or down, the order will be the same per partition.
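Jesse's point can be illustrated with a toy example (the record names and helper below are made up for illustration): within one partition the order is fixed, but the interleaving across partitions may differ between runs.

```python
# Records written to two partitions, in their per-partition order.
partition_0 = ["a1", "a2", "a3"]
partition_1 = ["b1", "b2"]

def preserves_partition_order(run, partition):
    """True if the records of one partition appear in a consumed run
    in their original (per-partition) order."""
    seen = [record for record in run if record in partition]
    return seen == partition

# Two valid consumption orders, e.g. before and after a restart: the
# cross-partition interleaving differs, the per-partition order doesn't.
run_1 = ["a1", "b1", "a2", "b2", "a3"]
run_2 = ["a1", "a2", "b1", "a3", "b2"]

for run in (run_1, run_2):
    assert preserves_partition_order(run, partition_0)
    assert preserves_partition_order(run, partition_1)
```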

  • @KernelFault
    @KernelFault 7 years ago

    Thank you, that was helpful.

  • @DinuBee
    @DinuBee 6 years ago

    This seems pretty similar to the IBM MQ pub/sub model. How do you handle the input and output data formats? Is it possible to transform and translate them just like IBM Message Broker?

  • @saturnringskhan
    @saturnringskhan 8 years ago +3

    Good Explanation

  • @naveen-ib5ly
    @naveen-ib5ly 6 years ago

    perfect bro.

  • @svdfxd
    @svdfxd 5 years ago

    Different but nice explanation

  • @nazeerhussain6650
    @nazeerhussain6650 8 years ago

    Excellent !!

  • @lztverygood
    @lztverygood 5 years ago

    Good job, but green and yellow are not a good idea on bad screens

  • @TechInnovatorFor22ndCentury
    @TechInnovatorFor22ndCentury 5 years ago

    thanks!!

  • @SabarishChandrasekharan
    @SabarishChandrasekharan 7 years ago

    simple and effective

  • @deba0077
    @deba0077 9 years ago

    Pretty easy way to understand

  • @machocoding7858
    @machocoding7858 7 years ago

    so is it JMS?

    • @jessetanderson
      @jessetanderson  7 years ago

      macho coding no

    • @machocoding7858
      @machocoding7858 7 years ago

      EMS?

    • @jessetanderson
      @jessetanderson  7 years ago

      macho coding we could play 20 questions or you could read the Wikipedia article.

    • @machocoding7858
      @machocoding7858 7 years ago

      Sorry, but I see it's a lot like Tibco's EMS: publish-subscribe, FT, load balancing... I guess the difference is the open source.

    • @JamesTromans
      @JamesTromans 7 years ago

      They're fundamentally different in approach

  • @bunnihilator
    @bunnihilator 6 years ago

    Top

  • @MH-oc4de
    @MH-oc4de 5 years ago

    Then this producer, excuse me consumer, excuse me cockroach ... While I appreciate this attempt at clarifying/explaining kafka, it only reinforces for me that kafka and every layer on top of it (kafka streams, faust, scoobydoo) are not super well thought out solutions to any problems. If these were well thought out and *necessary*, it would be much easier to explain how they work. Harrumph!