KEDA: Event Driven and Serverless Containers in Kubernetes - Jeff Hollan, Microsoft

  • Published: Nov 21, 2019
  • Join us for Kubernetes Forums Seoul, Sydney, Bengaluru and Delhi - learn more at kubecon.io
    Don't miss KubeCon + CloudNativeCon 2020 events in Amsterdam March 30 - April 2, Shanghai July 28-30 and Boston November 17-20! Learn more at kubecon.io. The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy, and all of the other CNCF-hosted projects
    KEDA: Event Driven and Serverless Containers in Kubernetes - Jeff Hollan, Microsoft
    Event-driven and serverless architectures are defining a new generation of apps. However, to take full advantage of the serverless benefits of event-driven design, your application needs to scale and react to those events instantly, scaling from zero to potentially thousands of instances. These events may come in the form of queue and Kafka messages, or events from a cloud provider such as AWS SQS or Azure Event Hubs. KEDA 1.0 is an open-source component created in partnership between Red Hat and Microsoft Azure that provides event-driven autoscaling for your Kubernetes workloads. In this demo-filled session, learn how to get started with KEDA, how customers are using it to efficiently scale and run event-driven apps, and how everything from a simple container to a serverless function can integrate seamlessly and scale natively in an event-driven Kubernetes world.
    sched.co/Uaa6
  • Science
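To make the abstract concrete, here is a minimal sketch of the kind of manifest KEDA uses for queue-driven autoscaling. All names (the Deployment, queue, and authentication reference) are illustrative, and the field layout follows the current KEDA 2.x API; KEDA 1.0, the version presented in this talk, used the older `keda.k8s.io/v1alpha1` API group with slightly different fields.

```yaml
# Hypothetical example: scale a worker Deployment on Azure Storage queue length.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: queue-worker-scaler
spec:
  scaleTargetRef:
    name: queue-worker          # the Deployment to scale (illustrative name)
  minReplicaCount: 0            # scale to zero when the queue is empty
  maxReplicaCount: 100
  triggers:
    - type: azure-queue
      metadata:
        queueName: orders
        queueLength: "5"        # target pending messages per replica
      authenticationRef:
        name: azure-queue-auth  # TriggerAuthentication holding the connection string
```

With a manifest like this, KEDA activates the workload when messages appear and feeds the queue-length metric to the Horizontal Pod Autoscaler, which handles scaling between 1 and `maxReplicaCount`.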

Comments • 7

  • @felipeozoski
    @felipeozoski 1 year ago +1

    This guy is so fun 😊

  • @Nib1ru
    @Nib1ru 4 years ago +3

    Awesome presentation! KEDA helped us save a lot of money :)

  • @adrianthompson4915
    @adrianthompson4915 3 years ago

    Excellent Presentation, good stuff Jeff.

  • @darthvada42
    @darthvada42 4 years ago

    Thank you Incredible hulk!

  • @pengdu7751
    @pengdu7751 1 year ago

    Great talk!

  • @DarraghJones
    @DarraghJones 4 years ago +2

    I'm a little bit confused why scaling on queue length is better than scaling on CPU. Shouldn't the CPU increase to 100% before the queue starts to backlog? At which point the HPA will scale out the deployment. If the queue is backlogging and the CPU isn't at 100% why would scaling out (i.e. adding more CPU) help?

    • @SnowmEVE
      @SnowmEVE 2 years ago

      The video processing example seems like a better use case for KEDA.
      Each message on the queue is an expensive task that needs a lot of resources. You design an app that only needs to worry about processing one message and specify its resource requirements. KEDA creates a new Kubernetes Job per message to proactively scale on demand. This is better than scaling on CPU utilisation, since each queue message starts being processed sooner.
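The job-per-message pattern described in this reply can be sketched with a manifest like the one below. This uses the KEDA 2.x `ScaledJob` resource; in the KEDA 1.0 era discussed in the talk, the same idea was expressed through a `ScaledObject` with a job scale type. The image, queue, and resource figures are illustrative assumptions, not details from the talk.

```yaml
# Hypothetical sketch: one Kubernetes Job per queue message for heavy video work.
apiVersion: keda.sh/v1alpha1
kind: ScaledJob
metadata:
  name: video-transcode
spec:
  jobTargetRef:
    template:
      spec:
        containers:
          - name: transcoder
            image: example/transcoder:latest  # processes one message, then exits
            resources:
              requests:
                cpu: "2"
                memory: 4Gi
        restartPolicy: Never
  maxReplicaCount: 50            # cap on concurrent Jobs
  triggers:
    - type: azure-queue
      metadata:
        queueName: videos
        queueLength: "1"         # aim for one Job per pending message
```

Because each Job declares its own resource requests up front, the cluster autoscaler can provision capacity as messages arrive, rather than waiting for existing replicas to saturate their CPU.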