Got to understand kafka by following your tutorials. So clear explanation!! You are great😇
Just started watching your Kafka videos today at 10 am, and I couldn't stop myself: it's 2 am the next day and I've just completed all your Kafka videos with hands-on practice. Just one suggestion: keep your Kafka consumer video in the Kafka playlist, otherwise many people may lose track of it. Also, I'm waiting for other Kafka topics like Kafka Connect with a database, where messages received from different source systems are listened to by the Kafka server and stored in our database, and vice versa.
Wow, super exciting! Thanks buddy, I will definitely cover all the concepts of Kafka.
I like your videos
Great topic, we are using this in my billing project
Thanks
Thank you buddy 😊
No words. Awesome lecture !!!...
Nice Basant, Keep going with this series...
Thanks for the amazing tutorial!
Great video!
Explains how Kafka Consumers work pretty clearly. As an idea for the next video/s, you can go over more of the properties of Kafka for Producers/Consumers - and explain concepts like batching, the most common properties that can be configured (like manual acknowledgement or not) etc
Thanks for the tips! That's what's on my queue 🤪
Awesome explanation, I was able to learn the whole thing very well. Thank you!
you are awesome Basant
Very nice and helpful!
Glad it was helpful!
Excellent video!
Thank you for your tutorial, it's easy to understand,
but I have a question: is it possible to lose data in Kafka between the producer and consumer process?
If so, how can I avoid losing data?
We need to implement a retry; I will cover that.
Nice explanation. Can you cover in upcoming videos how we can replay or retry lagged messages, how to identify which messages lagged, and how to replay those?
Yes i will cover that
Could you please explain with router component calling producer and consumer logic
awesome
Hi Basant, thank you for an excellent, informative video. What I do not understand about the lag is: if Kafka stores the messages for a minimum of one week, then why do we have to republish the messages which are in the lag? Why will they not automatically be reprocessed when the consumer is back up and running?
It will automatically be consumed once the consumer comes back online. Republishing is only required when you want to resend particular events.
Good explanation Basant. Great work. I have one quick question: you changed the group id in the Java class but not in the yml, and the application still works fine. Then what is the point of adding the group id in the yml?
You can add it in either place, buddy. I forgot to mention that in the video.
Hey, first of all thank you for the great tutorial. I've one question: when you change the group-id in the service class, is it not mandatory to change or add that new group-id in application.yml?
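For reference, a sketch of how this works in Spring Kafka (class, topic, and group names here are made up for illustration, not taken from the video): a groupId set on the @KafkaListener annotation overrides the spring.kafka.consumer.group-id value from application.yml, which is why changing only one of the two still works.

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class DemoConsumer {

    // This groupId wins over spring.kafka.consumer.group-id in application.yml;
    // the yml value acts only as a default for listeners that don't set one.
    @KafkaListener(topics = "javatechie-demo", groupId = "javatechie-group-2")
    public void consume(String message) {
        System.out.println("consumed: " + message);
    }
}
```

So the yml entry is not useless; it is the fallback group id for any listener that does not declare its own.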
Great , If you can enlighten on the real time project prospective, it would be great
I will do one e2e project using microservice that time will cover that scenario
@@Javatechie Thank you 😊
Thank you for the video. Can you please tell me what will happen if you start the kafka-consumer-example project twice? How many consumers will there be? 3 consumers in each instance?
Basant, how do you create this beautiful presentation? Which tool or animation do you use?
No animation buddy, it's simply Microsoft PowerPoint.
The example is clear and I like it, but these 3 consumers who are listening to the same topic and working together: could they call the same method of the logic layer without problems?
Yes, no problem at all, because the messages will not be duplicated.
How can I create multiple consumer groups and consume messages concurrently? Please explain.
Can we have multiple consumer groups with the same group id reading the same topic? Can anyone share the answer? Also, I need all Kafka-related questions... Java Techie is very good, and I'd also like one video covering Kafka questions and answers at an experienced level.
Hi Basant!! Can you please add all the necessary videos of this particular Kafka series to a YouTube playlist? It will make it easy to access all relevant videos from one place. And as and when you create new videos, you can add them to the playlist. Thanks!
Hi buddy it’s already there in Kafka for beginners playlist please check
Got it. Thanks!!
Hi,
Instead of creating new consumer groups, can we not increase the thread count to 3 in one method using the thread property?
I will cover that content buddy
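For context, Spring Boot does expose a setting for this, so a single listener method can be backed by several consumer threads instead of duplicating the method; a minimal sketch (the value 3 is illustrative):

```yaml
spring:
  kafka:
    listener:
      concurrency: 3   # one @KafkaListener method, three consumer threads in the same group
```

Each thread then gets its own share of the topic's partitions, so this only helps up to the partition count.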
Thanks sir for making useful videos for us.
But I very respectfully request you to work on a project having at least 6 entities/tables, with Spring Boot on the back end and React/Angular/Next on the front end.
If I restart the consumer, will it continue to read the lagged messages?
Once the consumer is enabled again, will the lagged values reach the consumer automatically?
Yes, it will process them automatically once the consumer is back online.
Hi Basant, I was going through this Kafka tutorial and I have a question on the group id that you assigned to multiple consumers. My question is: can we provide different group ids to different consumers?
Yes we can. I just used the same group id to demonstrate the partition and consumer relationship.
Hi can you create videos on Kafka stream?
Message consumption is decided by the leader broker and not by ZooKeeper. ZooKeeper keeps the status of the brokers and acts as the coordinator for all the brokers. Am I wrong, Basant?
No you are correct
If consumers get shut down while consuming and we start the consumers again, will they not pick up those messages which were not consumed?
Yes, it will start picking up from the last offset it read before.
Hi brother, good evening. I would like to join your Spring Boot & Microservice sessions, so kindly guide me on how to register and talk to you. Thanks!
Hello buddy, please check out any video description; I have shared the course link and coupon code there. If you are still confused, please refer to this video: ruclips.net/video/84KZa4dFLTo/видео.html
How long are the messages stored in Kafka?
If the consumer is stopped and later restarted, it should continue to consume the data based on the offset. Is that a correct understanding?
No, if the consumer is shut down, then what is there to consume and who will consume it, buddy?
@@Javatechie I mean if that consumer is stopped and restarted, it will continue to consume the data based on the offset. Is that not possible? Like in real time: the producer produces data and it is stored in the topic, and the consumer should consume that data and put it in the database. If the consumer server stops in the middle and is restarted, can't that consumer group continue to consume the data based on the offset? That is my question.
Offset Management:
1. Kafka maintains the offset for each consumer group. When a consumer reads messages from a topic, it keeps track of the offset of the last message it has processed for each partition.
2. The offsets are stored in an internal Kafka topic called __consumer_offsets. Kafka brokers manage the offset commits, and offsets are periodically committed via the consumer group coordinator.

Consumer Restart Behavior:
1. When a consumer is stopped or goes down, the next time it restarts it retrieves its last committed offset for each partition from the __consumer_offsets topic.
2. Upon restarting, the consumer begins consuming messages from each partition starting at the last committed offset. This ensures that the consumer continues from where it left off, processing the messages that were not yet consumed before the shutdown.
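In a Spring Boot consumer this behaviour maps to a couple of configuration keys; a minimal sketch (the group name is illustrative, the property keys are standard Spring Boot Kafka keys):

```yaml
spring:
  kafka:
    consumer:
      group-id: demo-group
      # Committed offsets are stored in the internal __consumer_offsets topic.
      enable-auto-commit: true
      # Only consulted when the group has NO committed offset yet;
      # a restarted consumer with committed offsets resumes from them.
      auto-offset-reset: earliest
```

Note that auto-offset-reset never overrides a committed offset; it only decides where a brand-new group starts.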
Are any videos on kafka coming?
Yes error handling and retry
Is it required to add a consumer group?
Yes, it's a mandatory step.
How can we fetch all the messages from a particular topic after hitting an endpoint URL? Help me with this. I tried to do it with the help of KafkaConsumer, which has a poll method, but after hitting the endpoint I'm getting null in the consumer. Please help me with this.
Which field are you getting null in?
@@Javatechie In the consumer.poll section. Can we connect separately on Meet so I can show you there?
@@saurabhmaurya6964 sure please drop me an email to javatechie4u@gmail.com
Hi, so we don't need to define a ProducerFactory and ConsumerFactory in our code?
That will be my next video don't worry
How do we create consumer instances dynamically in a single method? For every consumer instance you have created one method, but shouldn't one method handle multiple consumer instances?
Hello, recently I faced a question in an interview: if I give you a Java project along with a remote server without any IDE, how do you run that project? Can you make a video on it?
Depends, there's no single answer here. First of all, how is this Java application packaged: jar, war, or something else? Then you have to ensure that the necessary dependencies are bundled, and use the appropriate command based on how the application is packaged. If it's not packaged, you just run the main class 😅
Hi, can you make a video on how Kafka handles duplicate events or messages? What if the producer sends the same event/message twice, and how will the consumer acknowledge the duplicates?
Okay sure i will do that
How your producer sends messages to the queue is most important. If you send messages in a sequential way using some ID like OrderId, then the message content may be duplicated but at least the order is not. Resending messages is required if the original message was not processed properly. There is a Transactions API that supports exactly-once processing. An alternative is to have a dedupe check on the consumer. But why you insist on not allowing duplicate messages is for you to answer.
@rakushhkar4225 Recently, in an interview, I got this question. The interviewer asked me what happens when an exception occurs during message processing on the consumer side. How will the consumer process the failed messages? I am a beginner with Kafka. If you have some video tutorials or sample code, please suggest them.
@@siddharthanepal1962 Also, better to get RabbitMQ installed to see the console where all messages are processed and stored; the D (durable) parameter is very important. Work on stream and quorum queues in detail too, along with the classic type.
There is an idempotency property on the producer that helps avoid duplicate message processing here. Just sharing a point of knowledge: if it is enabled, the broker will ignore repeated messages caused by producer retries.
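As a sketch, in Spring Boot that producer property can be set like this (standard Kafka property names; values shown for illustration):

```yaml
spring:
  kafka:
    producer:
      acks: all                      # required for idempotence to be effective
      properties:
        enable.idempotence: true     # broker drops duplicates caused by producer retries
```

One caveat: this only de-duplicates automatic retries within a producer session. If the application itself calls send() twice with the same payload, that is two distinct messages, and you still need a dedupe check on the consumer side.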
What about @EnableKafka annotation??
Not mandatory
Great video. Can you cover the Avro schema and retry mechanism, and also how to handle exceptions during message consumption?
I will do that exception handling
Bro how to connect offset kafka
Sir audio is not available
It's available buddy please check your system configuration
It is not working for me. The message producer worked fine, but I got nothing on the message consumer.
Share your repository we will debug
🙏💯
The consumer is not able to receive messages or connect to the broker.
@javatechie I need your help to complete some work; it is very urgent. I'm subscribing to a topic, and I need to interpret the topic messages and save them to DynamoDB.
Are you using SQS ?
@@Javatechie I’m using Kafka
The GOAT
What?
It means Greatest Of All Time, GOAT for short.
bro zoom in lol the text is tiny at times