Kubernetes cluster autoscaling for beginners

  • Published: 23 Aug 2020
  • Subscribe to show your support! goo.gl/1Ty1Q2
    Patreon 👉🏽 / marceldempers
    Today we're taking a look at Kubernetes cluster autoscaling.
    When you start out with Kubernetes, you add deployments and pods.
    Slowly, your nodes get full.
    The cluster autoscaler helps add and remove nodes as your capacity demands change.
    Check out the source code below 👇🏽 and follow along 🤓 (a minimal example manifest is sketched under Source Code below)
    Also if you want to support the channel further, become a member 😎
    marceldempers.dev/join
    Check out "That DevOps Community" too
    marceldempers.dev/community
    Source Code 🧐
    --------------------------------------------------------------
    github.com/marcel-dempers/doc...
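    A minimal sketch of the kind of manifest the video works with: a Deployment whose pods declare CPU/memory requests (the names, image and values below are illustrative, not the exact files from the repo above). The scheduler reserves the requested amounts on a node; once no node has enough unreserved capacity left, new pods stay Pending and the cluster autoscaler adds a node.

    # Illustrative Deployment with resource requests (hypothetical names and values)
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example-app                # hypothetical name
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: example-app
      template:
        metadata:
          labels:
            app: example-app
        spec:
          containers:
          - name: example-app
            image: nginx:1.25          # placeholder image
            resources:
              requests:
                cpu: 500m              # half a core reserved per pod
                memory: 256Mi          # reserved memory per pod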
    If you are new to Kubernetes, check out my getting started playlist on Kubernetes below :)
    Kubernetes Guide for Beginners:
    ---------------------------------------------------
    • Kubernetes development...
    Kubernetes Monitoring Guide:
    -----------------------------------------------
    • Kubernetes Monitoring ...
    Kubernetes Secret Management Guide:
    --------------------------------------------------------------
    • Kubernetes Secret Mana...
    Like and Subscribe for more :)
    Follow me on socials!
    marceldempers.dev
    Twitter | / marceldempers
    GitHub | github.com/marcel-dempers
    Facebook | thatdevopsguy
    LinkedIn | / marceldempers
    Instagram | / thatdevopsguy
    Music:
    Track: jimmysquare - Mean Machine | is licensed under a Creative Commons Attribution licence (creativecommons.org/licenses/...)
    Listen: / mean-machine
    Track: jimmysquare - GMT+8 | is licensed under a Creative Commons Attribution licence (creativecommons.org/licenses/...)
    Listen: / gmt8
    Track: jimmysquare - Like Apollo | is licensed under a Creative Commons Attribution licence (creativecommons.org/licenses/...)
    Listen: / z6q0jhxtdsjr
    Track: jimmysquare - My Band | is licensed under a Creative Commons Attribution licence (creativecommons.org/licenses/...)
    Listen: / my-band
    Track: jimmysquare - Not Red Foxy | is licensed under a Creative Commons Attribution licence (creativecommons.org/licenses/...)
    Listen: / not-red-foxy
    Track: Amine Maxwell - Le Soir | is licensed under a Creative Commons Attribution licence (creativecommons.org/licenses/...)
    Listen: / le-soir
    Track: n0tavailable - {free for profit} mourner - sad trap beat | is licensed under a Creative Commons Attribution licence (creativecommons.org/licenses/...)
    Listen: / mourner
    Track: souKo - souKo - Parallel | is licensed under a Creative Commons Attribution licence (creativecommons.org/licenses/...)
    Listen: / parallel

Comments • 57

  • @MarcelDempers
    @MarcelDempers  3 years ago +7

    That was the cluster autoscaler.
    Check out the Pod autoscaler
    👉🏽 ruclips.net/video/FfDI08sgrYY/видео.html

  • @user-qk4tx9jc4m
    @user-qk4tx9jc4m 3 years ago +3

    This is just pure gold on YouTube. I feel like I have found a goldmine.

  • @rudypieplenbosch6752
    @rudypieplenbosch6752 2 months ago

    This is great info, very well explained, thank you.

  • @remus-tomsa
    @remus-tomsa 3 months ago

    Great video man, very easy to understand and follow! Congrats!

  • @testuserselvaraj
    @testuserselvaraj 4 years ago +2

    Like the way you present it, you make it simple to grasp and understand :)

  • @partykingduh
    @partykingduh 3 years ago

    I’ve been looking for someone like you for months. Loved the presentation!

  • @CareerDelTorro
    @CareerDelTorro 4 years ago +3

    Sweet stuff! Awesome editing, very pleasant to watch :)

  • @jaymo107
    @jaymo107 3 years ago +1

    This couldn't have come at a better time, we had pods getting evicted due to insufficient memory and couldn't figure out why, this helped a lot. Thank you!

  • @rahulmarkonda
    @rahulmarkonda 2 years ago

    Holy smokes….. I learnt a lot in 12 mins.

  • @vmalj89
    @vmalj89 3 years ago

    Excellent explanation. Quick, crisp and neat.

  • @harikrishna3258
    @harikrishna3258 2 years ago

    Superb. Very concise and helpful. Thank you for sharing these insights

  • @bhdr111
    @bhdr111 1 year ago +1

    Great tutorial, thank you. The music/ambiance is sometimes disturbing but still okay.

  • @ThotaMadhuSudhanRao
    @ThotaMadhuSudhanRao 3 years ago

    good one. thanks for your effort to make a quality tutorial

  • @ericansah525
    @ericansah525 1 year ago

    Amazing video with great practical examples.

  • @Jadeish01
    @Jadeish01 3 years ago

    This is beyond helpful. Thank you!

  • @f.5528
    @f.5528 1 month ago

    Very interesting video. Thank you.

  • @saransabarishs4382
    @saransabarishs4382 3 years ago

    Beautifully explained. thanks Bro !!

  • @narigina6414
    @narigina6414 1 year ago

    Great explanation, thank you

  • @XEQTIONRZ
    @XEQTIONRZ 3 years ago

    Great video Sir. Very informative.

  • @martinzen
    @martinzen 3 years ago

    Excellent video my man, thanks a lot

  • @minhthinhhuynhle9103
    @minhthinhhuynhle9103 2 years ago

    As usual, damn good content, Mr Dempers.

  • @yovangrbovich3577
    @yovangrbovich3577 4 years ago

    Keen for the next vid! Thanks Marcel

  • @cristiancontreras2924
    @cristiancontreras2924 4 years ago

    Awesome video, greetings from Chile.

  • @exit-zero
    @exit-zero 3 years ago

    Awesome video as always

  • @szymonf5554
    @szymonf5554 4 years ago

    Thanks for another awesome video

  • @BobWaist
    @BobWaist 6 months ago

    Excellent video, you really do a good job of explaining things in a crisp and concise way. One question remains, however: you describe that a certain amount of CPU gets allocated for each of the pods, even though it isn't necessarily in use. Doesn't this break the idea of scalability, because now each pod has completely overprovisioned resources (i.e., they are allocated but idle)? I somehow assumed that this would be part of the autoscaling, which vertically scales the containers depending on the load, or was this part of your video and I missed it?

  • @vishnukr6375
    @vishnukr6375 2 years ago

    You are really great :), and thanks for the information. Please keep going ahead...............

  • @fdghjvgf
    @fdghjvgf 3 years ago

    loved it ! :)

  • @jameeappiskey5830
    @jameeappiskey5830 3 years ago

    You are Legend and you must know it

  • @flenoir34
    @flenoir34 3 years ago +2

    This is really interesting. I also found that pods can be evicted due to a lack of ephemeral storage. This is related to the OS disk of my node instance, which only has 30GB. I was wondering if there's a way to handle the storage parameter to avoid pods being evicted? Thanks for these videos and a very nice channel.

  • @raheelmasood8656
    @raheelmasood8656 3 months ago

    If I really want to understand what is happening behind the scenes, this is the channel I come to.

  • @robinranabhat3125
    @robinranabhat3125 1 year ago

    Great video :) I was just curious. The typical use case I imagine is for nodes to scale up or down fully automatically based on the number of requests. But here, we are manually changing the number of pods.

  • @georgezviadgoglodze7810
    @georgezviadgoglodze7810 3 years ago

    Awesome

  • @AllanBallan
    @AllanBallan 4 years ago

    Awesome topic Marcel! Keepem comin. Have you tried K9s to visualize stuff in the cluster? Was thinking of givin it a spin myself...

    • @szymonf5554
      @szymonf5554 4 years ago +1

      I can't imagine working without K9s, but it's still useful to get a grasp of the kubectl commands.

  • @hobbes6832
    @hobbes6832 3 years ago

    I was wondering why Kubernetes didn't go the CNI/CSI route of abstracting away the platform-specific aspects of node addition... also, it seems there's no support in Kubernetes for Intel RSD PODM-based dynamic node composition. Great vid!

  • @ict7334
    @ict7334 2 years ago

    Hi there. This is a very informative and comprehensive video, thanks for that. I was wondering something you probably have an answer to: for cluster autoscaling, how much time do you reckon it would take from the point where you run out of space on a node to the point where the new node is operational? And for pod autoscaling, from running out of space to a new operational pod?

    • @MarcelDempers
      @MarcelDempers  2 years ago +2

      This depends; there's a thing called scaling lag:
      1) metrics are ~30 sec delayed
      2) the time for the scheduler to determine that a new node is needed can take a few minutes
      3) provisioning a new node can take 3-5 min
      4) pod creation time depends on what you are running
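      For reference, the upstream cluster-autoscaler exposes timing flags that feed into this lag. A hedged sketch of its Deployment follows (flag names are from the upstream project; the values are illustrative, and a real install also needs RBAC, a service account and cloud-provider configuration, all omitted here):

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: cluster-autoscaler
        namespace: kube-system
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: cluster-autoscaler
        template:
          metadata:
            labels:
              app: cluster-autoscaler
          spec:
            containers:
            - name: cluster-autoscaler
              image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.27.3   # example tag
              command:
              - ./cluster-autoscaler
              - --scan-interval=10s                 # how often unschedulable pods are checked
              - --scale-down-delay-after-add=10m    # wait after a scale-up before considering scale-down
              - --scale-down-unneeded-time=10m      # how long a node must look unneeded before removal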

  • @mateustanaka682
    @mateustanaka682 3 years ago +2

    Congrats, excellent video. I have a question about CPU units: in your example you said 4 cores equals 4096m. Shouldn't it be 4000m? Do we measure millicpu the same way as memory?

    • @MarcelDempers
      @MarcelDempers  3 years ago +3

      Yes you're right, my bad. It should be 4 cores = 4000m :)
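      To make the units concrete, here is the resources stanza of a container spec (illustrative values): CPU is expressed in cores or millicores, where 1 core = 1000m, so a 4-core machine offers roughly 4000m; memory uses byte quantities with Mi/Gi suffixes rather than millis.

      resources:
        requests:
          cpu: 500m          # 0.5 core; 4 cores = 4000m
          memory: 256Mi      # mebibytes, not millicores
        limits:
          cpu: "1"           # 1 core = 1000m
          memory: 512Mi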

  • @mayureshpatilvlogs
    @mayureshpatilvlogs 3 years ago

    Excellent explanation. Keep it up.
    My question is: if autoscaling scales a node down once there are sufficient resources, what will happen to the pods that are already running on it?
    Thanks

    • @MarcelDempers
      @MarcelDempers  3 years ago +2

      Thanks for the kind words
      The cluster autoscaler will only scale down if the node is not utilised. Kubernetes will not interrupt pods to scale nodes down.
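      As an aside, when the autoscaler does remove an underutilised node, the pods still on it are drained and rescheduled elsewhere; a PodDisruptionBudget, which the cluster autoscaler respects, limits how many replicas of an app can be disrupted at once. A minimal sketch with hypothetical names:

      apiVersion: policy/v1
      kind: PodDisruptionBudget
      metadata:
        name: example-app-pdb        # hypothetical name
      spec:
        minAvailable: 1              # keep at least one replica serving during a node drain
        selector:
          matchLabels:
            app: example-app         # matches the Deployment's pod labels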

    • @mayureshpatilvlogs
      @mayureshpatilvlogs 3 years ago

      @@MarcelDempers
      Thanks for the reply, it really means a lot. But let's say we have one pod that is serving requests, and the controller knows the node is underutilised. Will that pod be killed, or will it wait until it finishes the request?

  • @devops_scholar
    @devops_scholar 3 years ago +1

    Hi Marcel, thanks for the great content. I have watched your video twice but am confused about one thing: when you scaled to 12 pods, you mentioned that your computer has 4 cores, all exhausted with roughly 8 pods running. So how come the autoscaler would add one more node to your K8s cluster when the machine has no CPU left?

    • @MarcelDempers
      @MarcelDempers  3 years ago +1

      A cluster autoscaler will only add a node when the total requested CPU exceeds the available CPU in that node. Pods would usually wait in a 'Pending' state until either CPU is freed up or a node is available that satisfies the CPU request for that Pod. Hope that helps 💪🏽
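      To put rough numbers on that, using the video's 4-core example (values illustrative): a 4-core node offers at most ~4000m of allocatable CPU, so with 500m requested per pod about 8 pods fit. Scaling the hypothetical Deployment below to 12 replicas requests 6000m; the pods that don't fit sit in Pending until the cluster autoscaler brings up a node that satisfies their requests.

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: example-app              # hypothetical name
      spec:
        replicas: 12                   # 12 x 500m = 6000m requested, more than one 4-core node offers
        selector:
          matchLabels:
            app: example-app
        template:
          metadata:
            labels:
              app: example-app
          spec:
            containers:
            - name: example-app
              image: nginx:1.25        # placeholder image
              resources:
                requests:
                  cpu: 500m            # reserved per pod, whether or not it is actually used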

  • @karthikrajashekaran
    @karthikrajashekaran 2 years ago

    I have Kubernetes on EKS. Do you have steps to implement autoscaling in the cluster?

  • @whooo71
    @whooo71 3 years ago

    If you're talking about scaling, then you should also talk about billing. Scaling is good, but it can also be bad for your wallet.

  • @TrueTravellingCoder
    @TrueTravellingCoder 4 years ago

    I am the first one to like the video :)

  • @flenoir34
    @flenoir34 3 years ago

    Very interesting. When I use memory limits I sometimes get "Out of memory" and my pod process is killed. I thought it would trigger another node instead. Should I remove the limits?

    • @MarcelDempers
      @MarcelDempers  3 years ago

      Limits are more of a last-resort protection. You should ideally use resource request values and set the pod autoscaler around those. Check out my pod autoscaler video for more info about scaling pods. The node autoscaler is only triggered when there is no space to schedule another pod.
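      A hedged sketch of "set the pod autoscaler around the requests": a HorizontalPodAutoscaler targeting CPU utilisation, which Kubernetes measures as a percentage of each pod's CPU request (names and numbers below are hypothetical):

      apiVersion: autoscaling/v2
      kind: HorizontalPodAutoscaler
      metadata:
        name: example-app-hpa          # hypothetical name
      spec:
        scaleTargetRef:
          apiVersion: apps/v1
          kind: Deployment
          name: example-app            # the Deployment to scale
        minReplicas: 2
        maxReplicas: 12
        metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 70   # percent of each pod's CPU request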

    • @flenoir34
      @flenoir34 3 years ago +1

      That DevOps Guy yes, will check this. Thanks a lot. Love your youtube channel, all my support !

  • @darkwhite2247
    @darkwhite2247 3 years ago

    Why are you assigning 1 core to two processes? Does assigning 500 millicores to a process have some advantage over allowing the process to use the entire core?

    • @MarcelDempers
      @MarcelDempers  3 years ago

      There is no advantage to using an entire core versus splitting it unless you know the required consumption of your workload. It really depends on understanding how much CPU your pod needs. If you don't know, I recommend you start with as little as possible and use your monitoring to figure out the best CPU value. There is a good app called Goldilocks which is great at finding recommended CPU settings based on actual consumption over time: github.com/FairwindsOps/goldilocks

  • @IoneHouten
    @IoneHouten 3 years ago

    I am using Kubernetes 1.19 and I get an error like this:
    unable to get metrics for resource cpu: failed to get pod resource metrics: the server could not find the requested resource (get services http: heapster :)
    How do I solve it?

  • @chornsokun
    @chornsokun 4 years ago

    Saved me googling time.

  • @Bangaram007
    @Bangaram007 7 months ago

    Hoala India.