Kafka Tutorial Rebalance Listener

  • Published: 11 Jan 2017
  • Spark Programming and Azure Databricks ILT Master Class by Prashant Kumar Pandey - fill out the Google form for course inquiries.
    forms.gle/Nxk8dQUPq4o4XsA47
    -------------------------------------------------------------------
    Data Engineering is one of the highest-paid jobs of today.
    It is going to remain in the top IT skills forever.
    Are you in database development, data warehousing, ETL tools, data analysis, SQL, or PL/SQL development?
    I have a well-crafted success path for you.
    I will help you get prepared for the data engineer and solution architect role depending on your profile and experience.
    We created a course that takes you deep into core data engineering technology and helps you master it.
    This course is for working professionals who are:
    1. Aspiring to become a data engineer.
    2. Looking to change their career to data engineering.
    3. Planning to grow their data engineering career.
    4. Preparing for the Databricks Spark Certification.
    5. Getting ready to crack Spark Data Engineering interviews.
    ScholarNest is offering a one-stop integrated Learning Path.
    The course is open for registration.
    The course delivers an example-driven approach and project-based learning.
    You will practice the skills using MCQs, Coding Exercises, and Capstone Projects.
    The course comes with the following integrated services.
    1. Technical support and Doubt Clarification
    2. Live Project Discussion
    3. Resume Building
    4. Interview Preparation
    5. Mock Interviews
    Course Duration: 6 Months
    Course Prerequisite: Programming and SQL Knowledge
    Target Audience: Working Professionals
    Batch start: Registration Started
    Fill out the below form for more details and course inquiries.
    forms.gle/Nxk8dQUPq4o4XsA47
    --------------------------------------------------------------------------
    Learn more at www.scholarnest.com/
    The best place to learn Data Engineering, Big Data, Apache Spark, Databricks, Apache Kafka, Confluent Cloud, AWS Cloud Computing, Azure Cloud, and Google Cloud - self-paced courses, instructor-led courses, certification courses, and practice tests.
    ========================================================
    SPARK COURSES
    -----------------------------
    www.scholarnest.com/courses/s...
    www.scholarnest.com/courses/s...
    www.scholarnest.com/courses/s...
    www.scholarnest.com/courses/s...
    www.scholarnest.com/courses/d...
    KAFKA COURSES
    --------------------------------
    www.scholarnest.com/courses/a...
    www.scholarnest.com/courses/k...
    www.scholarnest.com/courses/s...
    AWS CLOUD
    ------------------------
    www.scholarnest.com/courses/a...
    www.scholarnest.com/courses/a...
    PYTHON
    ------------------
    www.scholarnest.com/courses/p...
    ========================================
    We are also available on the Udemy Platform
    Check out the below link for our Courses on Udemy
    www.learningjournal.guru/cour...
    =======================================
    You can also find us on O'Reilly Learning
    www.oreilly.com/library/view/...
    www.oreilly.com/videos/apache...
    www.oreilly.com/videos/kafka-...
    www.oreilly.com/videos/spark-...
    www.oreilly.com/videos/spark-...
    www.oreilly.com/videos/apache...
    www.oreilly.com/videos/real-t...
    www.oreilly.com/videos/real-t...
    =========================================
    Follow us on Social Media
    / scholarnest
    / scholarnesttechnologies
    / scholarnest
    / scholarnest
    github.com/ScholarNest
    github.com/learningJournal/
    ========================================

Comments • 60

  • @ScholarNest  3 years ago

    Want to learn more Big Data technology courses? You can get lifetime access to our courses on the Udemy platform. Visit the link below for discounts and coupon codes.
    www.learningjournal.guru/courses/

  • @nemosourav 5 years ago +8

    I want to extend my sincere thanks for this amazing course. I have been following your Kafka playlist and, being a Kafka newbie, I have found this course amazing. Again, many thanks for your efforts.

  • @palasuresh1987 5 years ago +3

    Thanks a lot for creating this playlist and making it easy for people to understand Kafka. I would say it is one of the finest explanations of Kafka, starting from scratch.

  • @peterabiodunokusolubo1541 6 years ago +1

    This is a fantastic tutorial, I've been looking for this explanation for some time. Thanks

  • @kumarvairakkannu360 7 years ago +1

    This is fantastic, loved your presentation! Also this video cleared my previous doubt...Thanks a million!!!

  • @niftymiller6057 5 years ago +1

    Amazing videos. The way you present to the audience and the flow is unbelievable; I cannot thank you more.

  • @StrongbowJava 4 years ago +1

    Thank you very much for your perfect explanation, which helps me a lot to understand the complex topic of partition assignment and revocation. I like your slow and clear voice, so that I could follow you without a problem. Keep up the good work.

  • @nodeflowTrader 6 years ago

    Good job explaining Kafka this way - straight to the point. I like it :)

  • @anumsheraz4625 6 years ago +1

    Best tutorial I found so far. Thank you so much Sir for sharing your knowledge.

  • @DavidFraserable 3 years ago

    These vids are amazing. You rock!

  • @nooruskhan5200 5 years ago

    This is really a nice tutorial to start with Kafka, thanks for all your effort.

  • @UnicornTwichu 5 years ago

    Sir, you have represented and explained everything perfectly.

  • @SatishKumar-ix9mz 6 years ago

    Thanks a lot !! Excellent tutorial ..

  • @vikramvarshney3224 3 years ago

    I have been working on Kafka 0.9.0 for the last year and facing the problem of the coordinator rebalancing when the consumer process is no longer running. Whenever I restart the consumer process, it always triggers a rebalance. Now I understand exactly how to commit offsets properly. Thanks a lot for sharing the valuable knowledge. 👍🏻

  • @naveenkumarmurugan1962 5 years ago

    Great work sir.... God bless you..

  • @chaitanyatanwar8151 2 years ago

    Thank you!

  • @bhanud1806 7 years ago

    Your complete tutorial is very good and easy to follow. Thanks a lot for sharing this info with us :)

  • @cellisisimo 7 years ago

    Very clear and concise explanation of Kafka!! I cannot wait for other tutorials. Thanks!!

    • @ScholarNest  7 years ago +1

      Thanks a lot for the feedback. I am working on it and committed to at least one video every week.

  • @adershrp 5 years ago

    Thank you.

  • @vineettalashi 5 months ago

    Great video

  • @TheMcallist1 3 years ago

    Brilliant explanation - thanks

  • @gagangupta1255 3 years ago

    Simply awesome video explaining the rebalancing

  • @glpathy 4 years ago

    Thank you sir for a practical explanation with a problem statement and a resolution.

  • @mohanbabubanavaram5211 3 years ago

    Excellent Explanation

  • @rajareddy47444 6 years ago

    Hi, thank you, sir, for sharing your knowledge. The way you take a concept and explain it is awesome. After watching your Spark videos, I got the confidence that I can face interviews. Thanks for showing how to do things in real time using GCP.
    Now I have started learning Kafka in parallel with Spark. These videos are done in the terminal. Can you please explain Kafka in real time with GCP as well? That would be a great advantage for those who are moving to this ecosystem. Thank you.

  • @avismg222 7 years ago

    Very good explanation..awesome..

    • @ScholarNest  7 years ago

      Thanks a lot for your encouragement.

  • @deepaksoundappan3244 5 years ago

    Great work, Sir!!! This great material is saving tons of money for people who would otherwise have to take a stupid course on Edureka or other sites :)

  • @venkatakrishna5222 1 year ago

    Hi Sir, your Kafka teachings are fantastic. Can you please teach Spring Kafka as well? Thank you, Sir 🙏

  • @imochurad 6 years ago

    Hi, thanks for the great video. I have the following question: you are giving an example of how to make processing a single message and committing its offset a single atomic operation. This is easily achieved in your case since you are storing the data in the same database, so you are utilizing a JDBC transaction.
    What should I do if I have to consume a message and then POST it to an external API instead of saving it in the database? If my POST to the API was successful and my consumer crashed before storing the offset information, I think my message might be processed twice.
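    For reference, a minimal sketch of the single-transaction pattern the comment above describes: the consumed record and its offset are written to the same database inside one JDBC transaction, so they succeed or fail together. The JDBC URL, table names (tss_data, tss_offsets), and column names are hypothetical, not taken from the video.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import org.apache.kafka.clients.consumer.ConsumerRecord;

    public class AtomicOffsetStore {
        // Process one record and persist its offset in the same transaction.
        public void process(ConsumerRecord<String, String> record) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/demo", "user", "password")) {   // hypothetical DB
                conn.setAutoCommit(false);                        // one transaction for both writes
                try (PreparedStatement data = conn.prepareStatement(
                         "INSERT INTO tss_data (msg_key, msg_value) VALUES (?, ?)");
                     PreparedStatement offsets = conn.prepareStatement(
                         "UPDATE tss_offsets SET next_offset = ? WHERE topic = ? AND partition_id = ?")) {
                    data.setString(1, record.key());
                    data.setString(2, record.value());
                    data.executeUpdate();                         // store the message payload
                    offsets.setLong(1, record.offset() + 1);      // next offset to read after a restart
                    offsets.setString(2, record.topic());
                    offsets.setInt(3, record.partition());
                    offsets.executeUpdate();                      // store the offset alongside the data
                    conn.commit();                                // both writes commit together
                } catch (Exception e) {
                    conn.rollback();                              // neither the data nor the offset is kept
                    throw e;
                }
            }
        }
    }

    On restart, the consumer would read next_offset back from the table and seek() to it before polling, so the database remains the single source of truth for the consumer's position.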

  • @TejpalSingh-qe5ct 5 years ago +1

    @Learning Journal Shouldn't we use a ConcurrentHashMap, as multiple consumers will try to update the map simultaneously?

  • @virenmathur 6 years ago

    Hi,
    This is a very nice series on Kafka. Probably the best. Thanks for putting it together.
    I wanted to get some clarity - the commit in the consumer code is commented out. Is it because you want to show how rebalancing works? I have two questions for you:
    1. How will the commit happen when the next loop comes around? Would you uncomment the consumer.commitSync(.......)?
    2. Do I need to write additional code in the rebalance listener to clear out the offset map? My understanding is that the map would get overwritten.
    Please suggest.
    Thanks
    Viren

  • @venkatakrishna5222 1 year ago

    Hi Sir, thank you so much for the excellent explanation. I have a few doubts.
    1. Is a rebalance possible in a single-partition, single-consumer scenario? If so, do we need to implement a rebalance listener for this scenario?
    2. Can we statically bind that partition to that consumer?
    3. Can we disable the rebalance listener?
    Thank you very much, sir 🙏

  • @theashwin007 7 years ago

    Superb! I think this is THE BEST tutorial on Kafka. When I searched a few months back, there were no recordings this clear.
    Thanks for bringing this to the community.
    I have one question. Can we configure how many records the consumer can read in one poll? I mean, say there are 1000 records at the broker level and, as in your example, '20' records are returned - I hope that is a configured maximum?
    Like the various producer configs, can you please explain the different kinds of poll settings on the consumer side?

    • @ScholarNest  7 years ago

      We try to control the maximum number of records by setting limits such as max.poll.records and by passing a timeout parameter to the poll method. The poll method tries to return as many records as possible within those limits.
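      A minimal sketch of those two limits using the standard Java consumer API; the broker address, group id, and topic name below are placeholders.

      import java.time.Duration;
      import java.util.Collections;
      import java.util.Properties;
      import org.apache.kafka.clients.consumer.ConsumerRecords;
      import org.apache.kafka.clients.consumer.KafkaConsumer;

      public class PollLimitsDemo {
          public static void main(String[] args) {
              Properties props = new Properties();
              props.put("bootstrap.servers", "localhost:9092");   // placeholder broker
              props.put("group.id", "demo-group");                // placeholder group
              props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
              props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
              props.put("max.poll.records", "20");                // hard upper bound per poll()

              try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                  consumer.subscribe(Collections.singletonList("demo-topic"));  // placeholder topic
                  while (true) {
                      // poll() returns whatever arrived within the timeout,
                      // but never more than max.poll.records in one batch.
                      ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                      records.forEach(r -> System.out.println(r.offset() + " : " + r.value()));
                  }
              }
          }
      }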

  • @PA-vf5st 3 years ago

    Sir, fantastic tutorial and superbly explained.... One question: can we not commit the partitions passed to the onPartitionsRevoked method to avoid a CommitFailedException?

  • @ashokmacherla4573 5 years ago

    How do we calculate the number of partitions required for a topic?

  • @aravinthanm3183 5 years ago

    Sir, how do I clear the offset value? Please help me.

  • @rikuntri 7 years ago

    To start Kafka we need to start ZooKeeper, but in the Cloudera VM I think there is a ZooKeeper instance already running. So can anyone tell me how to check through the VM terminal whether the default instance is running or not? Also, if we start a ZooKeeper service we can see the logger screen, but how do we check it in the Cloudera VM?

  • @ragavkb2597 6 years ago

    Once the onPartitionsRevoked() callback is received, I understand we are trying to commit the offset. I think we should also stop processing the remaining messages; am I correct? Suppose during the last poll() we received 100 messages and, while processing the 51st message, we received the onPartitionsRevoked() callback and called commitSync to record that we had processed 50 messages so far. Messages 51 through 100 may then also get processed by the newly assigned consumer; won't that end up in duplicate processing of records? Thanks
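    For context, a minimal sketch of the commit-on-revoke pattern being discussed: after each fully processed record, the poll loop records offset + 1 (the next offset to consume), and onPartitionsRevoked commits whatever has been recorded so far. Class, method, and variable names are illustrative, not the exact code from the video.

    import java.util.Collection;
    import java.util.HashMap;
    import java.util.Map;
    import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.common.TopicPartition;

    public class RebalanceHandler implements ConsumerRebalanceListener {
        private final KafkaConsumer<String, String> consumer;
        private final Map<TopicPartition, OffsetAndMetadata> currentOffsets = new HashMap<>();

        public RebalanceHandler(KafkaConsumer<String, String> consumer) {
            this.consumer = consumer;
        }

        // Called from the poll loop after each record has been fully processed.
        public void addOffset(String topic, int partition, long offset) {
            currentOffsets.put(new TopicPartition(topic, partition),
                    new OffsetAndMetadata(offset + 1, "commit"));   // next offset to be consumed
        }

        @Override
        public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
            // Commit what has been processed so far; anything after that point
            // will be redelivered to whichever consumer receives the partition next.
            consumer.commitSync(currentOffsets);
            currentOffsets.clear();
        }

        @Override
        public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
            // Nothing extra needed for this sketch.
        }
    }

    Whether the poll loop should also stop processing the rest of the batch, as the comment asks, is an application-level decision; committing on revoke only bounds how many records may be processed twice.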

  • @vinayyelleti2714 3 years ago

    Can you please explain how we can achieve the same when @KafkaListener is used?

  • @rickytik-devops 3 years ago

    So you're saying that Kafka can be either a producer/sender or a consumer/receiver?

  • @Satyakam85 6 years ago

    Thanks, Mr. LJ (unfortunately I haven't figured out your name yet :( ), for the great tutorials on Kafka. I have a query that has been bothering me for a while. Is it good practice to use the following for processing records:
    1. Use JMS, to which the consumer can drop messages (records from the Kafka broker) for an actual end application to process gracefully. That way one could at least commit offsets without needing to wait long?
    2. Use another Kafka broker (maybe a separate instance for processing records). So basically pull a stream of data from one broker and push it to another broker. I know it sounds crazy, but is it used anywhere?

    • @ScholarNest  6 years ago

      Let me understand your question. 1. You want to use JMS as a buffer because you think processing a message will take a long time and we have to wait for that before we commit. Are you sure? I guess it will complicate the solution, and I don't see a reason for it.
      2. I guess you mean pull from one topic, process it, and push to another topic (you didn't mean from one broker to another broker). I wonder why you would need another Kafka cluster.

    • @Satyakam85 6 years ago

      Thanks, sir. I get it when you say JMS would complicate the solution, as we would be going from distributed messaging to single-point messaging.
      As for point 2, it was just a wild thought. However, do we have topic-to-topic routing in Kafka?

  • @UnstoppableAadhya 4 years ago

    I am writing a service to retrieve past messages of a topic and display them. Can you please help me with which approach is the better choice for this?

  • @banam3540 3 years ago

    Hi, I have a situation. I have multiple consumers on the same topic. One of the consumers received 10 events to process. After processing 6 events, all of a sudden the consumer goes down. I am using auto commit. Now my problem is: what is the status of the 4 items still in the consumer queue? What happens to the current and committed offsets? How can I prevent a data-loss situation here? Please help me.
    For your information, we are using the Azure Functions Kafka trigger.

  •  7 years ago

    At 8:18, in the addOffset method, at the end, you write new OffsetAndMetadata(offset, "commit"); what does it mean? Why do you put the string "commit" there?

    • @ScholarNest  7 years ago +1

      That's metadata. It could be any string. You can use any non-null string instead of "commit".

    •  7 years ago

      And what's the purpose? You're putting an OffsetAndMetadata value into the hashmap with a TopicPartition key, right? But I don't understand where you use it and what it could be useful for. Thank you :)
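      A small sketch (not the author's code) of what such an entry carries: commitSync stores both the offset and the metadata string with the group coordinator, and both come back when the committed position is fetched later. Kafka does not interpret the string. The topic, partition, and offset values below are placeholders.

      import java.util.Collections;
      import org.apache.kafka.clients.consumer.KafkaConsumer;
      import org.apache.kafka.clients.consumer.OffsetAndMetadata;
      import org.apache.kafka.common.TopicPartition;

      public class MetadataDemo {
          // Commit a placeholder offset with an arbitrary metadata string, then read it back.
          static void commitWithMetadata(KafkaConsumer<String, String> consumer) {
              TopicPartition tp = new TopicPartition("demo-topic", 0);        // placeholder partition
              OffsetAndMetadata om = new OffsetAndMetadata(42L, "commit");    // any non-null string works
              consumer.commitSync(Collections.singletonMap(tp, om));

              OffsetAndMetadata committed = consumer.committed(tp);           // fetch the committed entry
              System.out.println(committed.offset() + " / " + committed.metadata());
          }
      }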

  • @hadoopworld35 7 years ago

    Hi Sir..
    Thanks for this tutorial.
    I have one doubt: how will we do it if we don't know the partition number?
    What would the addOffset() be here? Can we construct a TopicPartition with only the topic argument?

    • @hadoopworld35 7 years ago

      OK... So we cannot create a topic without partitions.

  • @vaibhavbacchav2674 5 years ago

    And this rebalancing won't happen if I add a producer that starts writing to the same topic from which a consumer group is reading (no addition/removal of consumers when the producer is added)?

    • @ScholarNest  5 years ago

      Rebalancing has nothing to do with the producer.

  • @vboilay 6 years ago

    Your tutorials are excellent. Let me know how I can reward you. Tipee?

    • @ScholarNest  6 years ago +4

      Your appreciation is my reward.

  • @vaibhavbacchav2674 5 years ago

    One small doubt, sir: is the rebalancing activity local to the consumer group? E.g., suppose we have 2 consumer groups, CG1 and CG2, and a consumer gets added/removed to/from CG1 - will it trigger rebalancing only for CG1, or will it trigger rebalancing for CG2 as well?
    PLEASE REPLY ASAP, thanks a lot.
    GREAT TUTORIALS, hats off!