Kubernetes Node Autoscaling with Karpenter (AWS EKS & Terraform)
- Published: Aug 7, 2024
- 🔴 - To support my channel, I’d like to offer Mentorship/On-the-Job Support/Consulting - me@antonputra.com
👉 [UPDATED] AWS EKS Kubernetes Tutorial [NEW]: • AWS EKS Kubernetes Tut...
▬▬▬▬▬ Experience & Location 💼 ▬▬▬▬▬
► I’m a Senior Software Engineer at Juniper Networks (12+ years of experience)
► Located in San Francisco Bay Area, CA (US citizen)
▬▬▬▬▬▬ Connect with me 👋 ▬▬▬▬▬▬
► LinkedIn: / anton-putra
► Twitter/X: / antonvputra
► GitHub: github.com/antonputra
► Email: me@antonputra.com
▬▬▬▬▬▬ Related videos 👨🏫 ▬▬▬▬▬▬
👉 [Playlist] Kubernetes Tutorials: • Kubernetes Tutorials
👉 [Playlist] Terraform Tutorials: • Terraform Tutorials fo...
👉 [Playlist] Network Tutorials: • Network Tutorials
👉 [Playlist] Apache Kafka Tutorials: • Apache Kafka Tutorials
👉 [Playlist] Performance Benchmarks: • Performance Benchmarks
👉 [Playlist] Database Tutorials: • Database Tutorials
=========
⏱️TIMESTAMPS⏱️
0:00 Intro
0:52 Cluster Autoscaler & Karpenter & AWS Fargate
1:29 Create AWS VPC Using Terraform
2:22 Create EKS Cluster Using Terraform
4:12 Create Karpenter Controller IAM Role
5:36 Deploy Karpenter to EKS
6:18 Create Karpenter Provisioner
7:02 Demo: Automatic Node Provisioning
=========
Source Code
📚 - Tutorial: antonputra.com/amazon/kuberne...
#AWS #Karpenter #DevOps
Efficient, fast enough, practical and super clear
Really amazing video, well thought out and straight to the point
I thank you very much
thank you, Yasser!
Excellent tutorial. Very quick and informative. Thanks for making this!
Thanks Pratap!
Many thanks for the tutorial, you are a lifesaver.
Nice video.
Another cool approach is not to use any (self- or EKS-managed) node groups at all. After creating the EKS cluster, you deploy CoreDNS and Karpenter on Fargate; that should be enough to bootstrap EC2 worker nodes.
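The Fargate bootstrap described above could be sketched in Terraform roughly like this (illustrative only; the cluster name "demo", the IAM role, and the subnet references are placeholders, not values from the video):

```hcl
# Run Karpenter itself on Fargate so no node group is needed to bootstrap.
resource "aws_eks_fargate_profile" "karpenter" {
  cluster_name           = "demo"
  fargate_profile_name   = "karpenter"
  pod_execution_role_arn = aws_iam_role.fargate.arn
  subnet_ids             = [aws_subnet.private_a.id, aws_subnet.private_b.id]

  selector {
    namespace = "karpenter"
  }
}

# CoreDNS lives in kube-system; selecting it here lets it start on Fargate
# before any EC2 worker nodes exist.
resource "aws_eks_fargate_profile" "kube_system" {
  cluster_name           = "demo"
  fargate_profile_name   = "kube-system"
  pod_execution_role_arn = aws_iam_role.fargate.arn
  subnet_ids             = [aws_subnet.private_a.id, aws_subnet.private_b.id]

  selector {
    namespace = "kube-system"
    labels = {
      k8s-app = "kube-dns"
    }
  }
}
```

Once CoreDNS and Karpenter are scheduled on Fargate, Karpenter can launch EC2 instances for everything else.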
Thank you for this outstanding video (your tutorials are truly exceptional). I have a small request: could you please provide an update to this one? The Helm chart featured in the video for deploying Karpenter hasn't been updated in about 2 years.
I like this method of deploying EKS clusters with Karpenter.
Take a look at this one, I have a section for Karpenter:
🔴UPDATED🔴 How to create EKS Cluster using Terraform MODULES (AWS Load Balancer Controller + Autoscaler + IRSA) - ruclips.net/video/kRKmcYC71J4/видео.html
👉 How to Manage Secrets in Terraform - ruclips.net/video/3N0tGKwvBdA/видео.html
👉 Terraform Tips & Tricks - ruclips.net/video/7S94oUTy2z4/видео.html
👉 ArgoCD Tutorial - ruclips.net/video/zGndgdGa1Tc/видео.html
super nice
Thank you!!
Thanks!
Thank you Ricardo!
Get Full-Length High-Quality DevOps Tutorials for Free - Subscribe Now! - ruclips.net/user/AntonPutra
Great tutorial Anton, thanks! I would suggest to call subnets like private-az1, public-az1, etc. I used a different region and got a little bit confused with the subnet names. Nonetheless, great work, thanks again.
Thanks! Sure, most of the time in the real world, you include AZ as a suffix.
Nice tutorial I will try to implement it.
Thanks!
Hi Anton, First of all this is great tutorial and it boosted my clarity on Karpenter. I have two queries.
1. Where are all these pods actually running? Locally, or in AWS?
2. What gets our Kubernetes deployment from YAML and kubectl to AWS?
Thank you.
Thanks!
1. All pods are running in AWS (EKS), including the Karpenter controller pod.
2. When you run kubectl apply -f <file>.yaml, kubectl sends that YAML manifest to the Kubernetes API server running in AWS. Based on that manifest, Kubernetes schedules the pods, pulls the container images, and runs them on the EC2 instances.
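As a concrete (hypothetical) example of that flow, applying a manifest like the one below with `kubectl apply -f deployment.yaml` sends it to the EKS API server, which then schedules the pods onto nodes; the name and image are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 5
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
```

If the cluster lacks capacity for all five replicas, the extra pods sit in Pending until Karpenter provisions nodes for them.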
Thanks for your effort! At 7:18, how can we configure it without specifying resources? I actually want to scale up while the pod is running.
You don't have to (but you should). If newly created pods get stuck in the Pending state, Karpenter provisions capacity to fit those pods.
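Karpenter sizes new instances from the pending pods' resource requests, which is why setting them explicitly is recommended. A minimal container spec with requests might look like this (the values are examples, not from the video):

```yaml
containers:
  - name: app
    image: nginx:1.25
    resources:
      requests:
        cpu: "1"
        memory: 1Gi
```

Without requests, the scheduler treats the pod as nearly free, so bin-packing and node sizing become unpredictable.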
@Anton Putra 7:12 Does Karpenter create a separate node from the managed nodes? If so, looking at the deployment YAML, I don't understand how it knows which node to pick.
Karpenter creates standalone EC2 instances and adds them to the Kubernetes node pool. The Cluster Autoscaler, on the other hand, uses AWS Auto Scaling groups.
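The deployment YAML doesn't need to select a node: Karpenter watches for unschedulable pods and launches instances that satisfy a Provisioner's requirements. A minimal Provisioner similar in spirit to the one in the video might look like this (v1alpha5 API, which newer Karpenter releases replace with NodePool/EC2NodeClass; the "demo" discovery tag is a placeholder):

```yaml
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  # Constraints the launched instances must satisfy
  requirements:
    - key: karpenter.sh/capacity-type
      operator: In
      values: ["spot", "on-demand"]
  # Hard cap on total provisioned capacity
  limits:
    resources:
      cpu: "100"
  provider:
    subnetSelector:
      karpenter.sh/discovery: demo
    securityGroupSelector:
      karpenter.sh/discovery: demo
  # Terminate empty nodes after 60 seconds
  ttlSecondsAfterEmpty: 60
```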
Thanks for the video! I am running this in a pipeline. Is there a better way to apply the provisioner.yaml file right after installing Karpenter, without creating it in the cluster manually? I want to manage everything through the pipeline without the manual step of logging into the cluster from the command line. Thanks
Sure, you can use the Helm Terraform provider, for example, or the kubectl provider.
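With the kubectl provider approach, the provisioner could be applied in the same Terraform run as the Helm release, removing the manual step. A sketch, assuming the gavinbunney/kubectl provider is configured against the cluster and a provisioner.yaml sits next to the module:

```hcl
# Apply the Karpenter provisioner as part of the Terraform run,
# after the Karpenter Helm release is installed.
resource "kubectl_manifest" "karpenter_provisioner" {
  yaml_body = file("${path.module}/provisioner.yaml")

  depends_on = [helm_release.karpenter]
}
```

The `depends_on` ensures the Provisioner CRD exists (installed by the chart) before the manifest is applied.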
I may have found out how to get my Graviton nodes in EKS, but similar to Fargate provisioning? Will see.
try this - github.com/antonputra/tutorials/blob/main/lessons/150/terraform/7-nodes.tf#L48-L49
What will happen if on same cluster I have cluster autoscaler installed? Which component will handle the node scaling if both of them are deployed?
You'll get a race condition =) Don't do it.
It didn't work for me. The nodes don't scale and the pods stay in the Pending state. Help!
Well, EKS evolves, and they may have introduced some breaking changes, or you might have misconfigured something. It's hard to say. Start by looking for error messages in the Karpenter controller logs.
How do I tag EC2 instances launched by Karpenter?
Take a look at the EC2NodeClass custom resource - github.com/aws/karpenter-provider-aws/blob/main/examples/v1beta1/general-purpose.yaml#L34C7-L34C19
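For reference, an EC2NodeClass carries a `spec.tags` map that Karpenter applies to every instance it launches. A sketch using the v1beta1 API; the role name, discovery tag value, and the tags themselves are placeholders:

```yaml
apiVersion: karpenter.k8s.aws/v1beta1
kind: EC2NodeClass
metadata:
  name: default
spec:
  amiFamily: AL2
  role: KarpenterNodeRole-demo
  subnetSelectorTerms:
    - tags:
        karpenter.sh/discovery: demo
  securityGroupSelectorTerms:
    - tags:
        karpenter.sh/discovery: demo
  # Custom tags added to every EC2 instance Karpenter launches
  tags:
    team: platform
    cost-center: "1234"
```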
So fast
On purpose; you can find the code and commands in the description.