How Autoscaling Works In Kubernetes (And Beyond)? Kubernetes Tutorial

  • Published: 12 Sep 2024

Comments • 68

  • @DevOpsToolkit
    @DevOpsToolkit  2 years ago +1

    How do you scale your apps and #Kubernetes clusters?

  • @akhil-ph
    @akhil-ph 2 years ago +3

    Thank you for this awesome video 👍, we all would like to see a video of HPA combined with Prometheus.

  • @viniciosantos
    @viniciosantos 2 years ago +3

    Great video as usual! This channel is very underrated

  • @hamstar7031
    @hamstar7031 2 years ago +5

    Great video, both as teaching and as a refresher for me on HPA and VPA!
    I would like to learn and understand how to utilize metrics from Prometheus as another means for the autoscaling use case.

    • @DevOpsToolkit
      @DevOpsToolkit  2 years ago +4

      It's coming... :)

    • @DrorNir
      @DrorNir 2 years ago +1

      @@DevOpsToolkit can't wait! I need it for a project like right now

    • @DevOpsToolkit
      @DevOpsToolkit  2 years ago

      @@DrorNir If everything goes as planned, that one should go live the third Monday from now.

    • @hiteshsmit
      @hiteshsmit 4 months ago +1

      Is the video made/available yet for using Prometheus for custom metric monitoring and using it for HPA?

  • @romankrut7038
    @romankrut7038 1 year ago +1

    Hey, I want to leave my feedback. Your videos are very useful and the explanations are very good. Keep going, man!

  • @adiavanth369
    @adiavanth369 2 years ago +2

    Very nice presentation as always. Looking forward to knowing more about HPA using custom metrics from Prometheus.

  • @iposipos9342
    @iposipos9342 3 months ago +1

    thanks for your videos, yes I would like to know how to scale pods with HPA based on metrics in Prometheus. Thank you very much

    • @DevOpsToolkit
      @DevOpsToolkit  3 months ago

      I'm planning to release a video that explores different types of scaling on July 8.

  • @Levyy1988
    @Levyy1988 2 years ago +8

    Great video as always!
    I think that it would also be useful to introduce the KEDA autoscaler along with Prometheus-based HPA.
    I am using KEDA and it is working great (in my case with RabbitMQ), since I can scale from zero pods, which is a huge cost saving (a sketch follows this thread).

    • @arns9006
      @arns9006 2 years ago

      We do Keda + Karpenter .. Magic

    • @DevOpsToolkit
      @DevOpsToolkit  2 years ago +1

      Yeah! KEDA is awesome.

    • @johnw.8782
      @johnw.8782 2 years ago +1

      Can I ask if you're using KEDA with GKE? I've had issues with intermittent metrics server availability. I love KEDA and want to use it, but it's def a blocker.

    • @DevOpsToolkit
      @DevOpsToolkit  2 years ago

      @@johnw.8782 I haven't used it in GKE just yet. So far, most of my experience with KEDA is on other providers.
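
      For reference, a scale-from-zero setup with KEDA and RabbitMQ looks roughly like the sketch below - a minimal, hypothetical example (the Deployment name, queue name, and connection env var are all illustrative), not the exact setup discussed here:

      ```yaml
      apiVersion: keda.sh/v1alpha1
      kind: ScaledObject
      metadata:
        name: worker-scaler
      spec:
        scaleTargetRef:
          name: worker                    # hypothetical Deployment to scale
        minReplicaCount: 0                # KEDA can scale all the way down to zero
        maxReplicaCount: 10
        triggers:
          - type: rabbitmq
            metadata:
              queueName: tasks            # hypothetical queue
              mode: QueueLength           # scale on the number of queued messages
              value: "5"                  # target messages per replica
              hostFromEnv: RABBITMQ_URL   # connection string read from an env var
      ```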

  • @kemibrianolimba682
    @kemibrianolimba682 11 months ago +1

    Brilliant...That was a great explanation. Keep up the great work

  • @javisartdesign
    @javisartdesign 2 years ago

    Many thanks! I had never even heard of VerticalPodAutoscaler before!! There are many ways to describe scaling for applications; I also like the Scale Cube, which looks more at how microservices can be scaled.

  • @ioannisgko
    @ioannisgko 2 years ago +1

    Thank you for the video!!! Question: how do we horizontally autoscale databases in Kubernetes? What are the challenges and what would be the proper way to overcome them? (Maybe an idea for a future video)

    • @DevOpsToolkit
      @DevOpsToolkit  2 years ago +1

      Adding it to the TODO list for a future video... :)
      Until then... If designed well, a DB should come with an operator that takes care of common operations, including scaling, and all you really have to do is change the number of replicas (unless you enable autoscaling, which is still not a common option).
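
      For illustration, with an operator such as CloudNativePG (one example of such a DB operator), scaling is just a matter of changing one field; a minimal sketch with hypothetical names:

      ```yaml
      apiVersion: postgresql.cnpg.io/v1
      kind: Cluster
      metadata:
        name: my-db            # hypothetical cluster name
      spec:
        instances: 3           # "scaling" the DB means changing this number
        storage:
          size: 10Gi
      ```

      The operator handles the hard parts (replication, failover, rejoining members) when that number changes.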

  • @naresing
    @naresing 2 years ago +1

    Hey Viktor, this video is very helpful. Please make a video on HPA with the Prometheus monitoring solution.

  • @acartag7
    @acartag7 2 years ago +1

    I started using jsonnet and it has been a pain to use, with a steep learning curve. A few months later we moved to ytt as it was easier to manage, but now we are going with Kustomize for all new projects.
    Jsonnet is really powerful, but when you bring someone new onto the team and show them jsonnet, they can easily feel overwhelmed.

    • @DevOpsToolkit
      @DevOpsToolkit  2 years ago

      That's my main issue with Jsonnet. It's too easy to over-complicate it and confuse everyone.

  • @allengooch7
    @allengooch7 2 years ago +1

    Good stuff. I believe the units for describing CPU limits should be called millicores instead of milliseconds, however (see the sketch after this thread).

    • @arns9006
      @arns9006 2 years ago +1

      whatever you say, based on your avatar, you're right
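
      For reference, this is how those units appear in a container spec; a minimal sketch with illustrative values:

      ```yaml
      resources:
        requests:
          cpu: 100m          # 100 millicores = 0.1 of a CPU core
          memory: 256Mi
        limits:
          cpu: 500m          # 500 millicores, not 500 milliseconds
          memory: 512Mi
      ```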

  • @mateuszszczecinski8241
    @mateuszszczecinski8241 2 years ago +1

    Thank you.

  • @Martin-sr8yb
    @Martin-sr8yb 2 years ago +1

    I would like to see a future video talking about the metrics for auto-scaling that you mentioned in the video (Prometheus, Kibana).

  • @CrashTheGooner
    @CrashTheGooner 2 years ago +1

    Master ❤️

  • @PhilLee1969
    @PhilLee1969 6 months ago +1

    Great video - as a complete beginner to Kubernetes it's helped me to understand what I want to do with a particular project that I'm working on. I currently have a long-running process that runs under Python but runs in a single thread. Up until now I've scaled vertically by moving to more powerful machines, but also horizontally by running additional copies of the process on different processor cores and then dividing the clients up geographically. If I've understood correctly, with Kubernetes it looks like I could run one copy but get it to spread across multiple cores or even multiple servers as required, whilst to my clients it just looks like one machine? Do I need to do anything to my process to ready it for deployment on Kubernetes, or is it just a case of setting the resource limits and scaling parameters?

    • @DevOpsToolkit
      @DevOpsToolkit  6 months ago

      Assuming that it is a stateless application, all you have to do is define an HPA that will scale it for you or, if scaling is not frequent, manually set the number of replicas in the Deployment (a sketch follows this thread).

    • @PhilLee1969
      @PhilLee1969 6 months ago +1

      It's stateless (I think) as nothing is left once the application exits other than some log files. I'm definitely going to have to put together a cluster and have a go. Thanks again !
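
      For reference, the HPA mentioned above can be as small as the sketch below (names and thresholds are hypothetical):

      ```yaml
      apiVersion: autoscaling/v2
      kind: HorizontalPodAutoscaler
      metadata:
        name: my-app
      spec:
        scaleTargetRef:
          apiVersion: apps/v1
          kind: Deployment
          name: my-app                   # the Deployment running the process
        minReplicas: 2
        maxReplicas: 10
        metrics:
          - type: Resource
            resource:
              name: cpu
              target:
                type: Utilization
                averageUtilization: 80   # add replicas above 80% average CPU
      ```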

  • @salborough2
    @salborough2 7 months ago +1

    Hi Victor, thanks for a great video :) Just a question from my side - do you know how GitOps (i.e. with ArgoCD) handles auto-scaling, as I assume the replica count in the deployment YAML will no longer conform to the declared YAML in an autoscaling setup?

    • @DevOpsToolkit
      @DevOpsToolkit  7 months ago

      Yeah. You should remove hard-coded replicas or nodes when using scalers. That's not directly related to GitOps. Argo CD and similar tools only sync manifests into clusters. If you do specify both replicas and a scaler, the former will be overwritten by the latter (a sketch follows this thread).

    • @salborough2
      @salborough2 7 months ago +1

      @@DevOpsToolkit thanks so much Victor - ahh ok, gotcha, I didn't realise I could leave out the replica count in the deployment manifest - thanks :) I'm going to look into this more. Also going to check out your videos on Argo Events and Rollouts to see how to deal with progressing a release through different environments while still using GitOps.
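
      For reference, if the replicas field cannot be removed from the manifest, Argo CD can instead be told to ignore it; a minimal sketch (all names and the repo are hypothetical):

      ```yaml
      apiVersion: argoproj.io/v1alpha1
      kind: Application
      metadata:
        name: my-app
        namespace: argocd
      spec:
        project: default
        source:
          repoURL: https://github.com/example/repo   # hypothetical repo
          path: k8s
          targetRevision: main
        destination:
          server: https://kubernetes.default.svc
          namespace: production
        ignoreDifferences:
          - group: apps
            kind: Deployment
            jsonPointers:
              - /spec/replicas                       # let the scaler own the replica count
      ```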

  • @sahilbhawke605
    @sahilbhawke605 2 years ago +1

    Hey, doing a great job - waiting for your videos and the notification bell to buzz every time ❤️ Just a question about HPA with respect to memory: do we have any information for reference? That would be helpful. Also, can we use them both simultaneously in our HPA manifest?

    • @DevOpsToolkit
      @DevOpsToolkit  2 years ago +1

      Don't use VPA together with HPA. They are not aware of each other and might take conflicting actions.
      If you're wondering how to deduce how much memory to assign to a Deployment managed by HPA, explore Prometheus. It should give you the info about memory utilization or anything else (for combining CPU and memory in one HPA manifest, see the sketch after this thread).

    • @sahilbhawke605
      @sahilbhawke605 2 years ago +1

      @@DevOpsToolkit sure, thanks for the information 💯 Can you please come up with a more precise video on cluster autoscaling in a GKE cluster and how it works - e.g. PodDisruptionBudget and the safe-to-evict annotation for pods, and how they're used the correct way? That would be a great help 💯

    • @DevOpsToolkit
      @DevOpsToolkit  2 years ago +1

      @@sahilbhawke605 Adding it to my TODO list... :)

    • @sahilbhawke605
      @sahilbhawke605 2 years ago +1

      @@DevOpsToolkit Sure, I'll be eagerly waiting ;)... Thanks for being such a great support by sharing your valuable 💯 knowledge with us through your videos. Always waiting for your new video #devops 💯
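
      For reference, CPU and memory can indeed be combined in a single HPA manifest; the controller computes the desired replica count for each metric and uses the highest. A minimal sketch with hypothetical names and targets:

      ```yaml
      apiVersion: autoscaling/v2
      kind: HorizontalPodAutoscaler
      metadata:
        name: my-app
      spec:
        scaleTargetRef:
          apiVersion: apps/v1
          kind: Deployment
          name: my-app
        minReplicas: 2
        maxReplicas: 10
        metrics:
          - type: Resource
            resource:
              name: cpu
              target:
                type: Utilization
                averageUtilization: 80
          - type: Resource
            resource:
              name: memory
              target:
                type: Utilization
                averageUtilization: 75
      ```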

  • @jarodmoser5588
    @jarodmoser5588 2 years ago +1

    Great video. Would it be possible to run the VPA in recommend mode while relying upon the HPA to ensure scaling of pods? Can that combination be used to fine-tune the autoscaling policies?

    • @DevOpsToolkit
      @DevOpsToolkit  2 years ago

      It could, but I would not rely on that. VPA recommendations might easily be incorrect due to HPA activities. I recommend using Prometheus instead.
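
      For reference, "recommend mode" corresponds to the VPA's updateMode: "Off"; a minimal sketch with hypothetical names:

      ```yaml
      apiVersion: autoscaling.k8s.io/v1
      kind: VerticalPodAutoscaler
      metadata:
        name: my-app
      spec:
        targetRef:
          apiVersion: apps/v1
          kind: Deployment
          name: my-app
        updatePolicy:
          updateMode: "Off"   # compute recommendations only; never evict or resize Pods
      ```

      The recommendations then appear in the VPA object's status but, as noted above, treat them with caution while an HPA is changing the replica count underneath.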

  • @unixbashscript9586
    @unixbashscript9586 2 years ago +1

    Hi Victor, thanks for this! I'd also really appreciate a video on how to do HPA based on metrics from Prometheus.
    Edit: I also have a question about Karpenter. Does it scale both horizontally and vertically?

    • @DevOpsToolkit
      @DevOpsToolkit  2 years ago +2

      Great! Adding it to the TODO list... :)

    • @Levyy1988
      @Levyy1988 2 years ago +1

      Karpenter scales horizontally, but it has the advantage that it will add a node that will handle all of your pods in a pending state, rather than just randomly adding a node in one of your autoscaling groups that can be too big for your current needs.

    • @unixbashscript9586
      @unixbashscript9586 2 years ago

      @@Levyy1988 hey, thanks

    • @DevOpsToolkit
      @DevOpsToolkit  2 years ago

      @@Levyy1988 Exactly. That's why I said in the video that vertical scaling of nodes is typically combined with horizontal (new node, new size).
      Karpenter is a much better option than the "original" Cluster Autoscaler used in EKS. It provides functionality similar to GKE Autopilot.

  • @bules12
    @bules12 6 months ago +1

    Gist is not well documented in the description! Can you fix it please?

    • @DevOpsToolkit
      @DevOpsToolkit  6 months ago +1

      Sorry for that, and thanks for letting me know. It should be fixed now.

    • @bules12
      @bules12 6 months ago +1

      @@DevOpsToolkit thanks for the quick response, ur the best!

  • @VinothKumar-ej2jc
    @VinothKumar-ej2jc 2 years ago +1

    When scale-in/down happens, how does k8s make sure there is no traffic being served by those pods? Will there be a chance that users experience interruption due to scale-in of pods?

    • @DevOpsToolkit
      @DevOpsToolkit  2 years ago +1

      When Kubernetes decides to kill a Pod, among other things it does the following:
      1. Stop all new incoming traffic from going to that Pod.
      2. Send the SIGTERM signal to the processes inside the containers in that Pod.
      3. Wait until the processes respond with OK to SIGTERM or it times out (the timeout is configurable).
      4. Destroy the Pod.
      Assuming that SIGTERM handling is implemented in the app, all existing requests will be processed before the Pod is shut down. SIGTERM itself is not specific to Kubernetes but a mechanism that applies to any Linux process (it might work on Windows as well, but I'm not familiar enough with it to confirm that). That means that if an app implements "best practices" that are independent of Kubernetes, there should be no issues when shutting down Pods.
      As a side note, the same process is used when upgrading the app (spin up new Pods and shut down the old ones), so you need to think about those things even if you never scale down.
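
      For reference, the timeout in step 3 is the Pod's terminationGracePeriodSeconds; a minimal sketch (the name and image are hypothetical):

      ```yaml
      apiVersion: v1
      kind: Pod
      metadata:
        name: my-app
      spec:
        terminationGracePeriodSeconds: 60       # how long step 3 waits before SIGKILL; default is 30
        containers:
          - name: app
            image: ghcr.io/example/my-app:1.0   # hypothetical image
      ```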

  • @VinothKumar-ej2jc
    @VinothKumar-ej2jc 2 years ago +1

    May I know why you have deployment.yaml and ingress.yaml in the overlay directory even though you don't have any changes/patches to them? You could keep them in the base directory itself, right?

    • @VinothKumar-ej2jc
      @VinothKumar-ej2jc 2 years ago

      Also, how is a ReplicaSet different from HPA?

    • @DevOpsToolkit
      @DevOpsToolkit  2 years ago +1

      You're right. I should have placed those inside the base directory. I copied those files from another demo and failed to adapt them for this one.
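
      For illustration, the layout suggested above would look roughly like this; a minimal sketch with hypothetical file names:

      ```yaml
      # base/kustomization.yaml - resources shared by every environment
      apiVersion: kustomize.config.k8s.io/v1beta1
      kind: Kustomization
      resources:
        - deployment.yaml
        - ingress.yaml
      ---
      # overlays/production/kustomization.yaml - only the differences live here
      apiVersion: kustomize.config.k8s.io/v1beta1
      kind: Kustomization
      resources:
        - ../../base
      patches:
        - path: hpa-patch.yaml   # hypothetical patch file
      ```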

  • @swapnilshingote8773
    @swapnilshingote8773 2 years ago

    First to comment...yooo

  • @snehotoshbanerjee1938
    @snehotoshbanerjee1938 5 months ago +1

    Does Kubernetes support scaling to zero?

    • @DevOpsToolkit
      @DevOpsToolkit  5 months ago

      It does but that is rarely what you want. There's almost always something you need to run.

    • @snehotoshbanerjee1938
      @snehotoshbanerjee1938 5 months ago +1

      @@DevOpsToolkit The question is about running an LLM app, which is costly to run 24/7.

    • @DevOpsToolkit
      @DevOpsToolkit  5 months ago

      If that is the only thing you're running in that cluster, the answer is yes. You can scale down the worker nodes. However, control plane nodes will have to keep running.
      Actually, now that I think of it, why don't you just create a cluster when you need it and destroy it when you don't?
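
      For reference, manually scaling a workload to zero is a one-liner (the deployment name is hypothetical); paired with cluster autoscaling, the now-empty worker nodes get removed as well:

      ```sh
      kubectl scale deployment my-llm-app --replicas=0
      ```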

  • @owenzmortgage8273
    @owenzmortgage8273 1 year ago +1

    Demo it, don't just talk about it; everybody can google 100 answers about this topic. Show people what you did in an enterprise environment, what you did in the real world. Don't just read the white paper.

    • @DevOpsToolkit
      @DevOpsToolkit  1 year ago +1

      Have you seen any other video on this channel? Almost all of them are demos, with a small percentage being about how something works (like this one). If anything, I might need to do fewer demos.