Apache Kafka® Brokers: Introduction to the Data Plane

  • Published: 4 Aug 2024
  • cnfl.io/kafka-internals-101-m... | An Apache Kafka® broker is a server or a network of machines that works as a replacement for traditional message brokers.
    In this module, Jun Rao (Kafka Committer, PMC Member, VP of Kafka, and Co-Founder, Confluent), provides an overview of:
    - Apache Kafka® broker architecture and how it works
    - The request loop that produce and fetch requests cycle through as they are processed by the broker
    - How network threads and I/O threads are utilized to process requests
    - How request and response queues operate
    - How page cache is utilized as brokers write events to topic partition logs
    - The role purgatory plays as the broker waits for events to be replicated to other brokers
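The request path in the bullets above can be sketched in miniature. This is a hedged toy model, not Kafka code: a network thread enqueues produce requests onto a shared request queue, an I/O thread appends each record to the partition log (which in a real broker lands in the page cache) and queues a response, and responses flow back to the connection in order.

```python
# Toy model of the broker data plane: network threads feed a shared
# request queue, I/O threads do the log append and queue responses.
import queue
import threading

request_queue = queue.Queue()   # shared by all network threads
response_queue = queue.Queue()  # per-connection in a real broker
partition_log = []              # stands in for the topic partition log

def io_thread():
    while True:
        req = request_queue.get()
        if req is None:          # shutdown sentinel for this sketch
            break
        partition_log.append(req["record"])              # "write to the log"
        response_queue.put({"offset": len(partition_log) - 1})

def network_thread(records):
    for r in records:            # socket reads in a real broker
        request_queue.put({"record": r})
    request_queue.put(None)

worker = threading.Thread(target=io_thread)
worker.start()
network_thread(["a", "b", "c"])
worker.join()

offsets = [response_queue.get()["offset"] for _ in range(3)]
print(offsets)  # → [0, 1, 2]
```

With a single I/O thread and FIFO queues, responses come back in the order the requests arrived; a real broker runs pools of both thread types.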
    Use the promo code INTERNALS101 to get $25 of free Confluent Cloud usage: www.confluent.io/confluent-cl...
    Promo code details: www.confluent.io/confluent-cl...
    LEARN MORE
    ► Apache Kafka Brokers: developer.confluent.io/learn-...
    ABOUT CONFLUENT
    Confluent is pioneering a fundamentally new category of data infrastructure focused on data in motion. Confluent’s cloud-native offering is the foundational platform for data in motion - designed to be the intelligent connective tissue enabling real-time data, from multiple sources, to constantly stream across the organization. With Confluent, organizations can meet the new business imperative of delivering rich, digital front-end customer experiences and transitioning to sophisticated, real-time, software-driven backend operations. To learn more, please visit www.confluent.io.
    #streamprocessing #apachekafka #kafka #confluent
  • Science

Comments • 11

  • @datamonkey9468
    @datamonkey9468 2 months ago

    The most underrated playlist ever

  • @toenytv7946
    @toenytv7946 2 years ago

    Learnt lots. Great job. Thank you Confluent.

  • @deeperroot
    @deeperroot 2 years ago

    Great and clear technical details

  • @morgadoapi4431
    @morgadoapi4431 2 years ago

    Thanks for the video!

  • @user-rq5gc5uw8d
    @user-rq5gc5uw8d 4 months ago

    Awesome video!
    But in the end - it's the I/O thread that gets blocked as we fetch the data from the disk

  • @isravertiz
    @isravertiz 2 years ago

    Great Job Confluent Team, I have a not-so-smart question: When the buffer of bytes is not full and the consumer request has been moved to the purgatory, how does the fetch request know (Other than time) that the request can be fulfilled now? Is there some sort of bidirectional flag updating the thread to move the request back again to the Network Thread and then to the request queue?
    IVS
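One way to picture the mechanism this question asks about is a hedged sketch along these lines (names like `watchers` and `try_complete` are illustrative, not Kafka's internals): a delayed fetch parked in purgatory is registered under a watch key for its partition, and the produce path re-checks those watchers on every log append, so no polling or bidirectional flag is needed.

```python
# Illustrative purgatory sketch: delayed fetches wait under a watch key;
# each append to that key re-checks whether they can now be completed.
watchers = {}                # watch key -> list of parked fetch requests
log = {"topicA-1": []}       # toy partition log

def try_complete(req):
    # Enough bytes accumulated? Then the response could be sent back
    # (in a real broker, handed to a network thread via the response queue).
    if sum(len(v) for v in log[req["key"]]) >= req["min_bytes"]:
        req["done"] = True
    return req["done"]

def delayed_fetch(key, min_bytes):
    req = {"key": key, "min_bytes": min_bytes, "done": False}
    if not try_complete(req):
        watchers.setdefault(key, []).append(req)  # park in "purgatory"
    return req

def append(key, value):
    log[key].append(value)
    # The append path notifies purgatory, which re-checks its watchers.
    watchers[key] = [r for r in watchers.get(key, []) if not try_complete(r)]

req = delayed_fetch("topicA-1", min_bytes=4)
append("topicA-1", "ab")   # 2 bytes accumulated: still parked
append("topicA-1", "cd")   # 4 bytes accumulated: completed
print(req["done"])  # → True
```

A timeout path (not shown) completes the request with whatever data is available once the maximum wait elapses.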

  • @smartdude876
    @smartdude876 1 year ago +1

    I had to watch 4 times to understand the logic

  • @user-rq5gc5uw8d
    @user-rq5gc5uw8d 4 months ago

    It's the I/O thread that gets blocked when fetching the data from Disk

  • @TymexComputing
    @TymexComputing 1 year ago

    LZ4 is a nice feature :) - XZ, alas, could also be a nice feature

  • @debabhishek
    @debabhishek 2 months ago

    @confluent Just one question: when the producer writes to Kafka, the data goes into the socket receive buffer, got it. But on which broker instance? A specific broker instance is assigned to each partition. Suppose I am writing to topic A, partition 1 and partition 2, and those two partitions are not handled by the same broker instance. How does this work? If there are 3 instances in the cluster, which instance's socket buffer will receive the write?

    • @DanicaFine
      @DanicaFine 2 months ago +1

      Hey there! Producers receive metadata from the brokers, so they know where each of the partitions they're writing to live. They use that metadata to assess which broker should receive the request with data for partition 1, 2, and so on.
      A given producer request will contain data that's destined (per the metadata) for the same broker. If there are messages destined for partitions across multiple brokers, the producer will send multiple requests.
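The routing described in this reply can be sketched as follows. This is a hedged illustration, not the Kafka client API: `leader_for` and `group_by_leader` are hypothetical names standing in for the producer's cached cluster metadata and its batching logic.

```python
# Illustrative sketch: route each record to the broker that leads its
# partition, producing one batch (one produce request) per broker.
leader_for = {("topicA", 1): "broker-2", ("topicA", 2): "broker-3"}

def group_by_leader(records):
    """Group (topic, partition, value) records into one batch per leader broker."""
    batches = {}
    for topic, partition, value in records:
        broker = leader_for[(topic, partition)]
        batches.setdefault(broker, []).append((topic, partition, value))
    return batches  # the producer sends one produce request per broker

batches = group_by_leader([
    ("topicA", 1, "x"), ("topicA", 2, "y"), ("topicA", 1, "z"),
])
print(sorted(batches))  # → ['broker-2', 'broker-3']
```

So each broker's socket receive buffer only ever sees requests for partitions it leads, which is what the metadata-driven routing above guarantees.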