I think this was a good foundational way of understanding kafka with just the right level of depth to be informative but not confusing. Thank you very much for making this!
Thank you so much for this description!! I'm a front end developer working with a big data project. Generally I don't have a need to mess with Kafka, but our engineers are using it... so it's great to have a simple explanation so we can communicate a bit better.
Should I be worried that I don't see a green paper?
It is green, but the camera picked it up as yellow.
Partitions, key-based retention, batch consumption! I was looking for these, and your video was a treat. Thank you!
Appreciate your effort in explaining through Legos. Thank you Jesse
Although I like the idea of simplifying the topic with Lego, I find your use of the Lego bricks doesn't support the explanation as well as other means, e.g. diagrams. I also got the impression that you tried to explain many of the Kafka concepts at once, instead of piece by piece.
Tactile explanations like this aren't for everyone.
I have the same feeling about it. A few notes to give you more specific feedback:
* You swap names and correct yourself a lot. It is confusing.
* You play with the Legos... a lot. We watch every move you make, so when you separate the bricks and put them together again, it is like an "unnecessary comment in code". You talk with the moves. Be brief.
* Why do the producers send three bricks (maybe use one, and then take two more when the data is replicated to consumers)? Why does Kafka break the three apart and send separate pieces to consumers; does every consumer get just a bit of the message? Why does some of the data stay on Kafka when it has already been sent to consumers?
Everything you do with your hands has a meaning and should really fit the explanation, which here I don't feel is the case.
I think there are a bunch of people who hear Kafka, see Legos, and blindly tell you you are awesome. Honestly, you are awesome. Thanks for your work! Here is the feedback that I hope will help you.
Cheers
@jessetanderson I feel this is certainly a great way to teach Kafka. My suggestion would be to bring in more cards to bring out the broker-to-topic-to-partition-to-consumer relationship. Also, use more blocks to show the ordering of data coming out in a FIFO way. Thanks.
Good one Jesse. Hope you have some Docker and Git ones in the making...
Would really love a clip like this that continues the talk but takes into account Kafka node failover inside the cluster.
Excellent and simple presentation! The Legos had a great impact when it came to explaining consumer-side failures/restarts and the Kafka servers' data retention policies!
Thanks +Jesse Anderson ! :)
That's a marvelous explanation! So nice and simple ^_^ Thank you so much
Nice video. Could you please clarify the compaction approach? You said that the newest messages will be lost during compaction. I've read the opposite in other sources: "Log compaction retains at least the last known value for each record key for a single topic partition".
You misunderstood what I was saying. I was saying there is a newer message, so the older message(s) will be lost in a compaction strategy.
Hi Jesse, thank you for the clarification!
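For anyone following this thread, here is a minimal sketch of the compaction semantics in plain Python (no Kafka client involved; the keys and values are invented for illustration). Compaction keeps the newest value for each key, so it is the older values for a key that get discarded:

# Minimal sketch of log compaction semantics: keep only the
# newest value seen for each key.
log = [
    ("user-1", "address=old street"),
    ("user-2", "address=elm road"),
    ("user-1", "address=new street"),  # newer value for user-1
]

compacted = {}
for key, value in log:
    compacted[key] = value  # a later value overwrites the earlier one

print(compacted)
# {'user-1': 'address=new street', 'user-2': 'address=elm road'}
# The old value for user-1 is gone; the newest one survives.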
Sorry to look over your shoulder, but why would the black Lego message (or topic?) be placed into Kafka and the consumer at the same time (11:45)? Shouldn't they be consumed and thus removed from the Kafka brokers?
That's one of the big differences with Kafka. All data is retained until it is deleted by the broker, usually days later. So no, data doesn't get removed after consumption.
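A small sketch of that retention model, again in plain Python rather than a real broker (the 7-day window here is just a common default, not something from the video). Reading a record never removes it; only the broker's age-based cleanup does:

import time

RETENTION_SECONDS = 7 * 24 * 60 * 60  # e.g. a common 7-day retention window

# Each record keeps its append timestamp; consuming never deletes it.
log = [(time.time(), b"event-1"), (time.time(), b"event-2")]

def consume(log, offset):
    # Reading a record returns it but leaves the log untouched.
    return log[offset][1]

def cleanup(log, now):
    # Only the broker's cleanup pass removes records, and only by age.
    return [(ts, value) for ts, value in log if now - ts < RETENTION_SECONDS]

print(consume(log, 0))           # b'event-1' -- still in the log afterwards
log = cleanup(log, time.time())
print(len(log))                  # 2 -- nothing has aged out yet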
Great explanation. I was waiting for the explanation of the total ordering of messages. If I understood it correctly, a producer will always publish to the same partition, thus keeping the order of messages. If this is true, is a new partition created as the publisher count increases?
Jose Thomas, the partition a publisher will send to only changes if the number of partitions changes. There are some caveats to this, but that's the general rule.
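A rough sketch of that rule in plain Python (Kafka's default partitioner actually uses a murmur2 hash of the key; the built-in hash() stands in for it here). The key-to-partition mapping is stable until the partition count changes:

def pick_partition(key: bytes, num_partitions: int) -> int:
    # Kafka's default partitioner computes murmur2(key) % num_partitions;
    # Python's built-in hash() stands in for murmur2 in this sketch.
    # (hash() is randomized across runs but stable within one run,
    # which is enough to show the idea.)
    return hash(key) % num_partitions

key = b"sensor-42"
print(pick_partition(key, 4))  # same key -> same partition, every call
print(pick_partition(key, 4))  # identical result: per-key ordering holds
print(pick_partition(key, 8))  # may differ: the mapping only moved
                               # because the partition count changed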
AMAZING POV !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Thank you for the informal explanation, it really helps.
Big thanks to you.
Thanks for the great video. It really helps
very clear, helpful, thanks
Thank you sir, very good explanation!!!
Why do you keep the camera behind your back?
Because I wanted it to be as if you're looking over my shoulder while I teach you something.
@jessetanderson Kinda cool, but it's the first time I've seen something like this.
Super job loved it!
Awesome one
great intro~
wow, thank you, you clarified so many concerns that i had :D
If consumer 1 is down, I understand that when it comes back it can get the correct order of data from Kafka 1 or Kafka 2. But what about the data combined together from Kafka 1 & 2: is it still in the same order as it would have been if the consumer never went down?
Ordering is guaranteed at a per-partition level. Whether the consumer is up or down, the order will be the same per partition.
Thank you, that was helpful.
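To make the ordering guarantee concrete, here is a small Python sketch with invented partition contents: records within each partition always come back in append order, but the interleaving across partitions is not fixed, whether or not a consumer restarted:

# Two partitions, each holding records in append order.
partition_0 = ["p0-msg1", "p0-msg2", "p0-msg3"]
partition_1 = ["p1-msg1", "p1-msg2"]

# Two valid consumption orders; both preserve per-partition order,
# but the interleaving across partitions differs.
run_a = ["p0-msg1", "p1-msg1", "p0-msg2", "p1-msg2", "p0-msg3"]
run_b = ["p0-msg1", "p0-msg2", "p1-msg1", "p0-msg3", "p1-msg2"]

def per_partition_order_kept(run, partition):
    # Filter the run down to one partition's records and check
    # that they appear in the same order as in the partition.
    return [m for m in run if m in partition] == partition

for run in (run_a, run_b):
    assert per_partition_order_kept(run, partition_0)
    assert per_partition_order_kept(run, partition_1)
print("Per-partition order holds in both runs; only the "
      "cross-partition interleaving can differ.")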
This seems pretty similar to the IBM MQ pub-sub model. How do you handle the input and output data formats? As in, is it possible to transform and translate them just like with IBM Message Broker?
Good explanation
perfect bro.
Different but nice explanation
Excellent !!
Good job, but green and yellow are not a good idea for bad screens.
thanks!!
simple and effective
Pretty much easy way to understand
So is it JMS?
macho coding, no.
EMS?
macho coding, we could play 20 questions or you could read the Wikipedia article.
Sorry, but what I see is a lot like Tibco's EMS: publish-subscribe, FT, load balancing... I guess the difference is that it's open source.
They're fundamentally different in approach
Top
Then this producer, excuse me consumer, excuse me cockroach... While I appreciate this attempt at clarifying/explaining Kafka, it only reinforces for me that Kafka and every layer on top of it (Kafka Streams, Faust, scoobydoo) are not super well thought out solutions to any problems. If these were well thought out and *necessary*, it would be much easier to explain how they work. Harrumph!