- Videos: 66
- Views: 20,255
Drewbernetes
Великобритания
Joined 1 Sep 2010
I make learning Kubernetes & Linux Easy!
I figured it was time to share the knowledge I had gained over the years with as many people as possible and this seemed like a good place to do it.
No presumptions of knowledge are made over here and if it is, it's because I've already covered it in another video.
Installing Ubuntu 24.04 With RAID
Let's learn how to install Ubuntu 24.04 - partially because my old Ubuntu 22.04 video needed a refresh, and partially because I hate the fact that that particular video still tops my most-watched list. It's from back when I was 'Learn With Drew' and straightened my hair to impress you, dear YouTube viewers - and I hate it! Time to kick it off the top of the charts.
Original Ubuntu 22.04 Video: ruclips.net/video/YCfYWCxirP8/видео.html
Accessing Machine with SSH: ruclips.net/video/nv43nSmBmR4/видео.html
Linus Tech Tips RAID 5 failure: ruclips.net/video/gSrnXgAmK8k/видео.html
====================
Because I mentioned them, here are all the other lovely YouTubers I mentioned during this video. It'd be wrong o...
Views: 309
Videos
Configure OIDC access to Vault in Less than 10 Minutes!
273 views · 4 months ago
Following on from the Authentik video I previously posted, we're going to be getting OIDC auth enabled on Vault and using our Authentik installation for authentication! Setup Authentik in 10 Minutes: ruclips.net/video/9JHyDM8XpK8/видео.html Impact Intermezzo Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 3.0 License creativecommons.org/licenses/by/3.0/ 00:00 - P...
Learn Authentik in 10 Minutes
420 views · 4 months ago
We're going to look at Authentik today. First we'll look at how to install it, then we'll begin setting up a Provider and Application which we'll use in Vault in the next video! 00:00 - Intro 00:23 - What is Authentik? 01:47 - The Helm Chart 02:07 - The Helm Values 03:00 - A Note on the Domain 03:56 - Adding the Helm Repo 04:17 - Installing Authentik 05:07 - Initial Setup After Install 05:44 - ...
Learn Vault in 10 Minutes
129 views · 7 months ago
Learn Vault in 10 Minutes or less (or more)! We're going to FLY through Vault and come out at the end wanting more. There is a lot to go through here in 10 minutes (actually 12:30) and there will be more to come on Vault, but this will get you started with it on Kubernetes. Vault: developer.hashicorp.com/vault/docs/install Vault Chart: github.com/hashicorp/vault-helm My GitHub Repo: github.com/...
Learn Helm in 10 Minutes
214 views · 8 months ago
Learn Helm in 10 Minutes or less! You're going to speedrun Helm and come out at the end wanting more. But don't worry, not only will more be coming but there is also a bunch of links here for you to digest in the meantime. Helm: helm.sh/ Bitnami Charts: github.com/bitnami/charts 00:00 - Intro 00:21 - What Is Helm? 00:36 - Installing Helm 01:11 - Why Use Helm? 01:41 - Creating a Chart 01:51 - An...
Kubernetes Pod Disruption Budgets | How to keep your workloads online during planned maintenance
136 views · 10 months ago
A Pod Disruption Budget helps you keep that performance and availability high while scheduled maintenance happens in the background. Losing node? Pffft, nothing to it! 00:00 - Intro 00:22 - Setup 00:41 - Case study and why you should use this 02:58 - What is a Pod Disruption Budget? 03:45 - How to create a Pod Disruption Budget 04:44 - Max Unavailable & Min Available comparison 06:44 - Seeing i...
Kubernetes Webhooks | How to create and use Validating and Mutating Webhooks
191 views · 11 months ago
Let's take a look at the validating and mutating webhooks to decide what can go on our cluster and what they should look like! 00:00 - Intro 01:27 - The Custom Controller 03:54 - Deploying the Custom Controller 04:42 - Creating a Certificate Authority using Cert Manager! 05:45 - Validating and Mutating Webhook Configurations 07:37 - See it in Action 09:18 - Wrap Up
Kubernetes Pod Probes | How to ensure a Pod is ready for action
70 views · 1 year ago
When Pods decide their own fate you're heading for trouble. Let's give them a nudge with Probes to ensure they understand when they're ready to serve traffic and when to consider themselves no longer viable. Liveness, Readiness and Startup Probes: kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/ 00:00 - Intro 01:08 - Liveness Probes - Execute Command 01:57 - Thresholds 04:55 - Liveness Prob...
Kubernetes Termination Logs | How to get exit messages from your Pods
135 views · 1 year ago
Sometimes we find a pod has restarted and it's nice to know why something terminated or completed. Thankfully, there is an app for that! We can use the termination log to write messages out to the lovely debuggers in our world so they can understand exactly why something completed or terminated. Termination Log (Determine the Reason for Pod Failure): kubernetes.io/docs/tasks/debug/debug-applica...
Kubernetes Security - Audit Logs | Who goes there?! How to log actions on your cluster
212 views · 1 year ago
We've got some users using the cluster now and that production app cannot be changed manually! Let's get some audit logs in place so we can find out who did what, when, where and more. It's time to get auditing! Audit Logs: kubernetes.io/docs/tasks/debug/debug-cluster/audit/ Kube APIServer: kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/ 00:00 - Intro 01:58 - A...
Kubernetes Security - Seccomp | How to use it to restrict Kernel calls in your Pods
274 views · 1 year ago
Kubernetes Security - AppArmor | How to prevent unwanted actions in your Pods
297 views · 1 year ago
Kubernetes Security - Pod Security Standards | How to use them to enforce security contexts
747 views · 1 year ago
Kubernetes Security - Security Contexts | How to use them to make your Containers and Pods secure
183 views · 1 year ago
Kubernetes Security - RBAC | Don't let people run loose with admin permissions on your cluster
125 views · 1 year ago
Upgrading Kubernetes Clusters | How to perform an upgrade using Kubeadm
174 views · 1 year ago
Backing Up And Restoring ETCD | How to ensure the state of your Kubernetes Cluster isn't lost!
98 views · 1 year ago
You Need To Update Your Kubernetes Package Repos TODAY! (September 2023)
132 views · 1 year ago
Kubernetes Network Policies | How to use them to ALLOW and DENY network traffic in your cluster
122 views · 1 year ago
Kubernetes Init Containers | What they are and how to use them
129 views · 1 year ago
Troubleshooting Kubernetes Node Errors | How to find them and fix them
181 views · 1 year ago
Troubleshooting Kubernetes Pod Errors | How to find them and fix them
130 views · 1 year ago
Cert Manager & External DNS | How to automate your DNS and TLS Certificates in Kubernetes
848 views · 1 year ago
Kubernetes Ingresses And Loadbalancer Services | How to make your apps accessible from the internet
279 views · 1 year ago
Kubernetes Affinity and Anti-Affinity | How can you use them to attract or repel workloads?
187 views · 1 year ago
Kubernetes Taints, Tolerations and Node Selectors | How to make workloads target specific nodes
100 views · 1 year ago
99.99999% Uptime in Kubernetes | How to keep your workloads online during an application upgrade
98 views · 1 year ago
Scaling your Kubernetes Apps | How to scale your Pods and Nodes
104 views · 1 year ago
Kubernetes Environment Variables | How to configure EnvVars in your Pods
101 views · 1 year ago
Kubernetes Cluster DNS | How does DNS work in a Kubernetes Cluster?
361 views · 1 year ago
When using RAID for 2 disks, does the size of the partitions depend on what RAID level is used? I followed along, but selected RAID 0 instead of RAID 1. This resulted in me having a 2G boot partition (instead of 1 in your example). I assume all sizes should be halved, since they will be added together with RAID 0. Is this correct?
Yeah that's exactly it. Because RAID 0 combines the two partitions from each disk into 1 logical partition, where the data is striped across both, combining two 1 GB partitions into a RAID 0 setup will result in 2GB available. I should have made that bit clearer in the video really, but with me focussing primarily on RAID 1 with the redundancy element, I forgot to mention that caveat with RAID 0.
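To put numbers on that, here is a quick shell sketch of the arithmetic (the 1 GB partition size mirrors the example from the video):

```shell
# Rough usable-capacity arithmetic for two 1 GB partitions (sizes in KiB)
part_kib=$((1024 * 1024))           # one 1 GB partition
members=2                           # number of member partitions
raid0_kib=$((members * part_kib))   # RAID 0 stripes: capacities add up -> 2 GB
raid1_kib=$part_kib                 # RAID 1 mirrors: capacity of one member -> 1 GB
echo "RAID 0: $((raid0_kib / 1024 / 1024)) GB, RAID 1: $((raid1_kib / 1024 / 1024)) GB"
```

So the same two partitions yield 2 GB under RAID 0 but only 1 GB under RAID 1, which is exactly the doubling described above.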
Thanks Drew for your video. I am a beginner and it was very easy to follow. I do have a question, if that's okay. I have 2 x 1TB SSDs and want to install CloudPanel; what partition structure would you recommend when installing Ubuntu Server 24.04? Thanks in advance.
Hi! Glad to hear it helped! With regards to your setup, it kind of depends on your use case. If you're doing this for fun and are just playing around with no real regard for the data, then no RAID or even RAID 0 would be absolutely fine with a boot, root and swap partition configured. This will give you speed and the full 2TB available, but no protection in the case of a failure. I wouldn't go to town configuring partitions for each site, as any decent hosting panel (I've not used this one before) should be able to create a secure enough separation of resources on a single partition. If, however, this data matters and you couldn't bear losing it, go with the setup I've put in this video. That way you only get 1TB of storage, but you get protection against the loss of a disk.
Is it possible to use SSH with a GUI? So the user can connect to the remote machine via SSH but get a graphical interface (basically a desktop view) of the remote machine?
Hi, I'm not aware of anything that uses SSH itself, but you could install a VNC server like TightVNC and then connect via that. You would of course need a desktop environment of some sort installed (GNOME, Cinnamon etc.) for this to work.
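A minimal sketch of combining the two: tunnel the VNC port over SSH so it is never exposed directly. This assumes a VNC server is already running as display :1 (port 5901) on the remote; `user@remote-host` is a placeholder:

```shell
# Build the SSH tunnel command (user@remote-host and port 5901 are placeholders;
# VNC display :1 conventionally listens on 5900 + 1 = 5901)
remote="user@remote-host"
cmd="ssh -L 5901:localhost:5901 ${remote}"
echo "$cmd"   # run this, then point a VNC viewer at localhost:5901
```

With the tunnel up, the VNC viewer connects to localhost:5901 and all desktop traffic travels inside the SSH session.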
Great video!!
Thanks!
What do you do when your ISP changes your IP? Do you just manually update it on Cloudflare every time that happens?
I just use ddclient on one of my linux machines. This will detect if a change has occurred and update the record in Cloudflare if required. Cloudflare have a list of tools you can use with them, here - developers.cloudflare.com/dns/manage-dns-records/how-to/managing-dynamic-ip-addresses/. If I recall correctly, ddclient supports a few different providers.
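For reference, a hypothetical ddclient.conf sketch for Cloudflare along the lines described above (the zone, hostname and token are placeholders, and option names can vary between ddclient versions, so check the documentation for yours):

```
# /etc/ddclient.conf - sketch only; zone, hostname and token are placeholders
protocol=cloudflare
use=web, web=checkip.dyndns.org   # detect the current public IP via a checkip service
zone=example.com
login=token
password=your-cloudflare-api-token
home.example.com                  # the record to keep updated
```

Run ddclient as a daemon (or from a timer) and it updates the record only when the detected IP changes.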
You speak so fast!
Yeah I do need to control my speed :-D
Hey, I’m running into an issue. Every time I try to start the control plane node, it gets stuck during the API check and eventually exceeds the deadline. I’ve tried using Kubernetes and also set up Calico, but I still can’t get past this problem with the API server.
Hi! So unfortunately there can be a raft of issues which may stop the API server coming up, from a node name not being correct/matching what it expects, to network issues, to a misconfiguration and more. Is there anything specific in the kube-apiserver logs that stands out? I get that it is timing out, but if you use crictl on the node (presuming you're using containerd, of course) then you can get the list of containers, one of which will be the API server, and then get the logs for that container. That will highlight the reason why it is failing to come online for you. The logs will point you in the right direction of what to check next. :-) Hope that helps you solve it!
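To illustrate the crictl workflow described above, here is a sketch that pulls the API server's container ID out of a simulated `crictl ps` listing (the IDs and column layout are made up for illustration; on a real node you would pipe `crictl ps -a` instead):

```shell
# Simulated `crictl ps -a` output standing in for the real command
ps_output='abc123def456   registry.k8s.io/kube-apiserver   Running
789aaa000bbb   registry.k8s.io/kube-proxy       Running'
# Grab the first column of the kube-apiserver row, then feed it to `crictl logs`
cid=$(printf '%s\n' "$ps_output" | awk '/kube-apiserver/ {print $1}')
echo "crictl logs $cid"
```

On the real node, `crictl logs "$cid"` is what surfaces the actual startup error.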
Simpler impossible, tnx man!
No problem!
10:35 Mate, I was going bonkers for a week trying to set up a HA cluster until I came across your channel. You can edit as you see fit; I've gone through lots of videos and this is perfect.
Haha nice one! Yeah, it was one of those scenarios where I considered chopping it out or recording it again, but in the end thought: "Naaa, leave it in". It's good for people to see errors really - we all make them, and anyone who pretends they are flawless on YouTube... well, they're not :-D Glad it helped though!
Thank you so much!
You're welcome!
SURPASSING 👍 Liked and subbed!
Thanks! Welcome 🙂
@@Drewbernetes May I cordially ask... what was your reason(s) for 3 RAIDs, md0, md1, and md2? Instead of just one RAID with 3 logical volumes atop a single LVM volume group? Edit to add: I wonder whether mirrored SSDs will properly wear-level with a RAID 1 swap space being written to? Kindest regards, friends and neighbours. P.S. I am inclined to just make a single RAID, a single volume group on top of that, and then as many logical volumes as needed from there. Yes, for UEFI, tick 'use as boot drive' before everything else.
@@chromerims You sure can! I'll be totally honest, it's a habit from my DC days to reduce blast radius should one array fail! That's literally it 😀. The way you're looking at doing it is 100% fine! There will be (very minor) performance tradeoffs doing it that way but it's negligible for the most part. But there are pros such as simplicity for partition management should you decide to increase the disk. What I've done isn't "the right way", it's just one of many ways. Hope that answers the question!
@@Drewbernetes Thank you 👍
@@chromerims No worries!
Thanks for this great straightforward guide!
No problem at all! Thanks for watching 🙂
Hey Drew, thanks, great video. I've been trying to get a HA setup for a few days now; after a few goes I ran into problems with Kubernetes versions 1.29, 1.30 and 1.31 where the control plane doesn't initialise, but with version 1.28 it just works every time. Would appreciate any feedback if you or anyone else has any thoughts. Thanks!
Hi! The process should generally be the same no matter which version you're using however the config may change slightly. It could be something as simple as a Feature Gate not being supported anymore or being moved into GA. What's happening when you start the process? Where is it failing? Depending on where in the process it fails for you, you should be able to check things like the kube-apiserver logs and the kubelet logs. These are your two main sources of errors.
@@Drewbernetes Thank you for the quick feedback. It doesn't seem to add the second IP address for kube-vip on the NIC, but I will dig into the logs more as you suggested. Oh, by the way, great channel; I've gone through most of your videos :)
@@anilpatel-ds3nx Thanks very much! Yeah have a check through those logs. Also check out the logs for kube-vip too. I've updated my Kubernetes installation to 1.31.0 today to check things over and everything is working as I would expect, so it does definitely work 🙂. However that's an upgrade from 1.30.4 where it was already installed. It's not a fresh installation. That being said if it wasn't going to work, it wouldn't work on the upgraded cluster either 🙂
@@Drewbernetes I tried to post a workaround link here but it seems my message doesn't get posted, so trying again.
Command pre-kubeadm:
sed -i 's#path: /etc/kubernetes/admin.conf#path: /etc/kubernetes/super-admin.conf#' /etc/kubernetes/manifests/kube-vip.yaml
Command post-kubeadm (edit note: this causes a pod restart and may cause flaky behaviour):
sed -i 's#path: /etc/kubernetes/super-admin.conf#path: /etc/kubernetes/admin.conf#' /etc/kubernetes/manifests/kube-vip.yaml
Sorry the filter is grabbing things and I never get notified 🤦♂️. I'll have to have a look into this as I've not experienced this myself as of yet. I did see a version of kube-vip that did seem to restart fairly often but since an upgrade it seems stable again so could well be the same thing you're seeing. I'll see if I can track this one down 😉
Thanks a lot for the explanation. In my case, I have an issue at the end of the OAuth2 configuration: when I click on 'Finish', nothing happens... I tried another browser, but nothing. After some research, I'm not the only one, and I may have to consider reinstalling AuthentiK. By the way, thanks 👍. I'm on version 2024.8.0, and I may need to downgrade to 2024.6.4.
No problem! I've not actually upgraded to 2024.8.0 yet - I planned to this week/weekend so I'll have to keep an eye out for that bug.
Great explanation, Thank you Drew
No problem at all!
Nice video ❤❤❤❤❤❤
👍
Thanks a lot bro, watching all this series now.
No problem at all. Enjoy!
Excellent indeed. Don't understand why the number of views isn't much higher.
Thanks so much! However, I'm the worst promoter in the world which probably doesn't help 🤣. I post on all the socials once when the videos are released and then move on as I don't really use social media! Maybe I should spend some more time on shameless self promotion to push it out more though. I should get inspired by Brett Fisher, NetworkChuck and Jeff Geerling! 🤔
@@Drewbernetes Yeah, I know them all. At least I have found you now anyway.
Super! The best I have seen so far.
Thanks very much!
Brilliant video, absolutely perfect.. subscribed!!!
Thanks very much!! Welcome!
How does the disk configuration change for RAID 5? And in 22.04 and 24.04, do we still need the swap partition? I thought they were using a swap file now?
Hi, on the face of it, the disk configuration remains the same. You'd need a minimum of 3 disks for RAID 5 though, and all would need to be set as bootable. However, I would strongly recommend against using RAID 5 on any disks larger than 1TB in size. It can cause all sorts of rebuild issues and you'd be better off using RAID 10 (1 + 0). As for swap, yes, if you don't create a swap partition then it will indeed create a swap file by default, that is correct! I'll be honest though, I don't even bother creating swap space of any kind these days. Machines have so much memory now that I can't actually remember the last time I ran out of memory and hit my swap space. If I was installing Linux on a laptop I'd still have it for the sake of hibernation/sleep, but on a server I don't see the need really :-) Maybe I'll do a RAID 5 example soon. I need to update this video for Ubuntu 24.04 anyway ;-)
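To put rough numbers on those trade-offs, here is a shell sketch of the usable-capacity rules of thumb (hypothetical 1000 GB disks; this is arithmetic only, not an mdadm recipe):

```shell
size_gb=1000                                  # hypothetical per-disk size in GB
raid5_disks=3                                 # minimum disk count for RAID 5
raid5_gb=$(( (raid5_disks - 1) * size_gb ))   # one disk's worth goes to parity
raid10_disks=4                                # minimum (even) disk count for RAID 10
raid10_gb=$(( raid10_disks / 2 * size_gb ))   # mirrored stripes: half the total
echo "RAID 5 on ${raid5_disks} disks: ${raid5_gb} GB usable"
echo "RAID 10 on ${raid10_disks} disks: ${raid10_gb} GB usable"
```

Note the RAID 10 figure needs one more disk than RAID 5 for the same usable space; that extra cost buys the simpler, safer rebuilds mentioned above.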
Awesome... Clear, concise and to the point every time. Keep it up; I could listen to you for hours on any topic.
Thanks very much @pfsykes !
Hi! Thank you for a very good video. I want to ask: should the kube-api-server and apiserver-advertise-address IPs be different, or can they be the same?
Hi, thanks very much. So the "kube-api-server" in this video is just a DNS alias that resolves to Kube-VIP's virtual IP address. This is like having a real domain pointing to a load balancer. It also means your certificates are generated using that domain name instead of an IP, allowing you to change the underlying IP without having to regenerate certificates. For example, say you owned the domain my-kube-cluster.example.com: you could point that to a load balancer with an IP of 1.2.3.4, and this would then route traffic through to (in this case) the three nodes that run as control planes, which may have 192.168.0.201-203 as their (local) IPs. The advertise-address is the IP address that is used by the other nodes of the cluster, so in my case it's the local IP address of the 1st control plane node. So yes, in theory you could use the same IP address for both fields, but it's not recommended in a HA setup as all traffic will hit one node before being directed to the correct location. If you were setting up a single control plane node it would be fine to do this, but to be honest I'd still set up Kube-VIP and use a DNS record (or a hosts-file adjustment as I do in this video) as it would allow me to A. add more control plane nodes at a later date and B. change the IP address without having to regenerate all of the cluster certificates. I hope that helps and clarifies this!
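A sketch of how the alias and advertise address fit together, using hypothetical addresses in place of the video's real ones (`--control-plane-endpoint` and `--apiserver-advertise-address` are standard kubeadm flags, but your exact invocation may differ):

```shell
# Hypothetical values standing in for the setup described above
vip="192.168.0.200"            # Kube-VIP virtual IP
alias_name="kube-api-server"   # DNS alias / hosts-file name for the VIP
node_ip="192.168.0.201"        # this control plane's own address
# The hosts-file line each node needs so the alias resolves:
hosts_line="${vip} ${alias_name}"
# Certificates are then issued for the name rather than the IP:
init_cmd="kubeadm init --control-plane-endpoint ${alias_name}:6443 --apiserver-advertise-address ${node_ip}"
echo "$hosts_line"
echo "$init_cmd"
```

Because the endpoint is a name, swapping the VIP later only means updating DNS or /etc/hosts, not regenerating cluster certificates.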
thank youuuuuu
No problem at all!
Thank you very much for your very clear, structured and easy-to-follow guidelines. I am looking forward to a time when your valuable inputs, and the time you take to answer comments, find proper reward for you and every other active contributor. 🙏
Thanks so much! It means a lot that people find this useful and when people take the time to comment - good or bad, it's all taken in and I read every one! It's great to hear you've found this useful!
Mine fails right before I get to the LVM screen, just after the 12:04 screen. I have tried to use versions 22 and 24. I can't figure it out.
Hi. That sucks to hear! What is happening for you when it fails? Are you getting any errors? If the installer is crashing or claiming "an error occurred" then it's usually, in my experience, a sign of a corrupted install ISO - though with it happening on both of them, that seems unlikely too. I can promise this process 100% works though 🙂. I've done this countless times and others have used this method too, as evidenced by their comments on here. I can't speak for 24.04 yet, as I'm yet to go through it on there, believe it or not! I imagine the process is the same though.
Thanks! Possible to have an updated video which uses security contexts?
No problem :-) I do have a video on Security Contexts already over here -> ruclips.net/video/JYVCdpYHUhg/видео.html Is this what you were looking for?
Very clear and easy to follow! Thanks!
I'm glad you found it useful. Thanks!
Thanks for the videos. Recently we have been using External Secrets; it's really helpful.
Thanks! I plan on doing a video on that asap. Authentik next, then I'll dive into the External Secrets Operator integration with Vault.
Just wanted to say awesome work keep it up... Your videos are a no nonsense guide and have helped me solve so many issues 🤩. The github link is broken 404
Hi and thanks so much! And yeah, silly me! It was still private 😀. I've made the repo public now so you should be able to see it all. Thanks for bringing it to my attention!
Great video. I was looking for how to set up RAID and this was very easy to follow. Thank you! One question though: when I do the first step, "Use As Boot Device", I end up with a `/boot/efi` mount point. Do I still need to create the `/boot` mount point in that case? I ended up with 4 mount points instead of 3 because of this (details on this screenshot: i.imgur.com/17iqFA3.jpeg):
/
/boot
/boot/efi
SWAP
Hi! Sorry for not responding. It seems I'm still getting the hang of this YouTube thing and it went into "for review" comments for some reason. Technically no, you don't need to have a boot partition at all. I generally create one to separate it out from the root FS though. Whilst it comes down to a matter of preference on the whole, there are some benefits, such as reducing file system complexity, which can improve the bootup process due to less demand being placed on the bootloader.
@@Drewbernetes It sounds like /boot and /boot/efi are not redundant then. Sounds good, thank you for your response. :)
I followed the exact same method, but when I do kubeadm init I am getting the error below (on RHEL7 and ppc64le arch, using kubeadm 1.29 and kube-vip 0.7.2):
Mar 14 14:48:13 dx11520-hs kubelet[40620]: I0314 14:48:13.740056 40620 kubelet_node_status.go:73] "Attempting to register node" node="dx11520-hs"
Mar 14 14:48:14 dx11520-hs kubelet[40620]: E0314 14:48:14.264412 40620 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"dx11520-hs\" not found"
Mar 14 14:48:15 dx11520-hs kubelet[40620]: E0314 14:48:15.934387 40620 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"kube-api-server-endpoint:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/dx11520-hs?timeout=10s\": dial tcp 192.168.16.202:6443: connect: no route to host" interval="1.6s"
Mar 14 14:48:15 dx11520-hs kubelet[40620]: E0314 14:48:15.934410 40620 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"kube-api-server-endpoint:6443/api/v1/nodes\": dial tcp 192.168.16.202:6443: connect: no route to host" node="dx11520-hs"
It looks like kube-vip is not able to bring the VIP up. Please give some suggestions. I have added an /etc/hosts entry for the master node on which I am running these commands, as well as a DNS entry for the VIP.
Hi! Sorry for the delay in responding. I'm not sure why but this comment ended up in the "for review" section and I've only just seen it 🤦♂️. So based on the error, it means the node "dx11520-hs" cannot be resolved from the node on which you're running kubeadm init. If you try and ping that hostname, do you get a response? If not, then there is something not working around the hosts file entry you've added. The hosts file approach is one method, but if you have a DNS server that can resolve the hosts, that would be better. I'd start by pinging the hostname first and seeing what you get back. The routing is not working from what I can see in the error though.
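A quick way to test the resolution step described above, using localhost as a stand-in for the real node name (substitute dx11520-hs when running on the affected node):

```shell
# Check that a node name resolves at all before blaming the network layer
name="localhost"   # stand-in; use the real node name, e.g. dx11520-hs
if getent hosts "$name" > /dev/null; then
  status="resolves"
else
  status="does not resolve - fix /etc/hosts or DNS first"
fi
echo "$name $status"
```

`getent hosts` consults the same NSS lookup path the kubelet does (hosts file then DNS), so it is a more faithful test than querying a DNS server directly.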
I liked this and I will share it with my team
Glad to hear it! Thanks for watching.
I have been so frustrated getting anything done with Raid in Ubuntu, until i saw this video. Thanks!
Glad I could help!
kubeadm init fails with both the IP and DNS name: no route to host.
Hi! Sorry to hear that. Are you able to supply any more information on this? It should be working if you've followed along with the tutorial. Can you copy and paste the kubeadm init command you're running? Also confirm the configuration file you're supplying is valid too - it could be a typo that is causing this. That being said, some things you can check from the node are:
ping google.com
dig google.com
nslookup google.com
If any of those fail then you have an issue with the node itself, in which case you'll need to resolve those first before continuing.
@@Drewbernetes This is the command I am running:
kubeadm init --control-plane-endpoint vip-k8s-master --apiserver-advertise-address 192.168.1.16
I set the record in /etc/hosts. When I do nc -v 192.168.1.40 6443 I get:
nc: connect to 192.168.1.40 port 6443 (tcp) failed: No route to host
Port 6443 is allowed in ufw. I am using Ubuntu 22.04.2.
Hi, sorry, this comment ended up in "held for review" for some reason. So you have "192.168.1.40 vip-k8s-master" in /etc/hosts? If so, then as long as you've correctly configured the kube-vip steps, this should work. I would recommend running "crictl ps" and reviewing the logs for the containers that were successfully created. Kube-vip creates an additional IP on the interface you've supplied to it, so as long as that's configured and the container is running, it should do that. Also check the interface itself to ensure the IP has been added: "ip a" will list all the interfaces and the addresses associated with them. Hopefully that will help you get to the bottom of why this isn't working for you.
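To illustrate the `ip a` check, here is a sketch that greps a simulated `ip -o addr show` listing for the VIP (the interface name, addresses and column layout are made up for illustration; on a real node you would pipe the live command output instead):

```shell
# Simulated `ip -o addr show` output: the node IP plus the /32 kube-vip adds
addr_output='2: ens18 inet 192.168.1.16/24 brd 192.168.1.255 scope global ens18
2: ens18 inet 192.168.1.40/32 scope global ens18'
vip="192.168.1.40"
if printf '%s\n' "$addr_output" | grep -q "inet ${vip}/"; then
  msg="VIP ${vip} present on the interface"
else
  msg="VIP ${vip} missing - check the kube-vip container logs via crictl"
fi
echo "$msg"
```

If the VIP line is absent from the real output, the "no route to host" error follows naturally: nothing is answering on 192.168.1.40.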
Same issue here
This channel is so underrated! I'm loving the videos, I hope you go more in depth in future installments.
Thanks so much! I hope to keep going for as long as I can on various topics around Kubernetes.
life saver 🙏
I'm glad it helped!
These installation videos are incredible! I've spent the last two days following along with your three installation videos with my own cluster (and scripting the entire thing). There's no tricks, just you doing it with us, and there's something special about that. Like a humble class TA. Great job, keep it up!
Thanks so much for the kind words! I wanted this series to be exactly that. I wanted to presume no prior knowledge and just take people through the whole process so I'm glad that it's come across that way!
Hi have you made a video on set up Kubernetes with external etcd cluster with VIP.
Hi, This video does use a VIP for the Kubernetes cluster via KubeVIP and an External ETCD cluster. Do you mean to use a VIP for the External ETCD cluster itself? If so, I'm not sure if that's possible (or maybe recommended) to be honest due to how ETCD is intended to be used. KubeVIP just works as a real-world LoadBalancer would work in that it provides a single IP that you can use to hit any API endpoint.
Great series! Like the one for Linux. Thanks Drew for this work. It's making me much more comfortable with these technologies on a daily basis. Can't wait to see the rest.
Thanks very much! It's good to hear people are finding it useful. I'm taking my time with each one this year as I did myself in trying to get one out almost every week throughout last year and the quality wasn't where I felt it could be! Still, there is a Helm video coming up in the next week or so!
@@Drewbernetes Maybe it's more a question of conforming to the codes of the YouTube algo than of quality, I think, because your videos are already often clear and well illustrated with examples, which is the most important thing when you want to make explanatory technical videos, in my opinion.
@@letsops yeah I certainly don't know the algo, that's for sure! But then I'm not about the numbers or making money from it all. For now it's about trying to teach people things in as clear a way as possible. Now I've done the in depth videos, I'll look at the "K8S in 15 minutes" and things like that, but they'll be quick vids rather than in depths technical explanations. I wanted the in depth stuff done first so they were there for the people who wanted to know more from the quick vids.
@@Drewbernetes I totally understand! That's why I mainly enjoyed this series. It's much more comprehensive and detailed than most of the other videos out there.
@@letsops thanks!
What's the rush?
I rush everything unfortunately! I just have to get things done... and now :-D
@Drewbernetes Hey Drew, when I test for cluster health I'm getting "tcp dial: connection refused". How do I solve this?
Hi! If you're seeing something like Get "localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused then I would start by double checking your kubeconfig to ensure it's configured correctly. It should be pointing to the IP address or DNS (if you have configured one) for KubeVIP. You can target the kubeconfig directly by setting `KUBECONFIG=/path/to/config` or by adding the flag to your kubectl command `--kubeconfig=/path/to/config`. If you're seeing that error above but with the IP or DNS name you've configured then it could be a firewall issue or that the KubeVIP Pod isn't running. In this case, you can rule out KubeVIP first by accessing the Control Plane you initialised first and running the same command whilst using the admin config located at `/etc/kubernetes/admin.conf`. If this works, then it's the firewall so you'll need to configure the firewall either on your nodes or the network to allow the appropriate traffic. If you've followed along with what I've done on Ubuntu in VMs, this should work by default. If it's not the firewall and you believe it to be KubeVIP then you can check via `crictl ps` that it's running. It may need reconfiguring and the manifest regenerating. I hope that helps!
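The two ways of pointing kubectl at a specific kubeconfig mentioned above, sketched (the path shown is the admin config kubeadm writes on the first control plane; adjust to wherever your config lives):

```shell
# Option 1: set the environment variable for the whole shell session
export KUBECONFIG=/etc/kubernetes/admin.conf
echo 'kubectl get nodes                                      # uses $KUBECONFIG'
# Option 2: pass the path per command, overriding the environment
echo 'kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes'
```

Running the check directly on the first control plane with the admin config, as suggested above, is what lets you separate a broken kubeconfig from a firewall or KubeVIP problem.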
@Drewbernetes got it, I missed the VIP configuration. Now I've seen all 4 videos and I'm going to do it from scratch. By the way, the way you're explaining the concepts is stupendous.
I have one question: for creating Kube-VIP, do we need a separate node, or can we assign any IP address within our interface on the main control plane?
@@RajasekharSiddela The way Kube VIP works is it makes use of an IP address that exists on your main network. It doesn't require a node of its own as it runs in a pod within your cluster. By main network I mean the same one from which your nodes get an IP. For example, if your control planes and worker nodes have an IP address of 192.168.0.x then the IP KubeVIP uses should be on that same network (192.168.0.0/24). Just make sure it's not an IP address that is in use by something else. Hope that helps!
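As a rough illustration of that check (the node and VIP addresses here are invented examples, and a /24 netmask is assumed for simplicity), comparing the first three octets is enough:

```shell
# Sketch: check a candidate Kube-VIP address sits on the same /24
# network as the nodes. Addresses are examples; a /24 mask is assumed.
node_ip="192.168.0.11"
candidate_vip="192.168.0.100"

node_net=${node_ip%.*}        # strips the last octet -> 192.168.0
vip_net=${candidate_vip%.*}

if [ "$node_net" = "$vip_net" ]; then
  echo "VIP is on the node network"
else
  echo "VIP is NOT on the node network"
fi
```

Remember the final check from the reply above still applies: even on the right network, the VIP must not already be in use by another machine.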
@Drewbernetes thanks for your quick response. I've got one more doubt: I'm using RHEL 7.7 VMs for cluster creation, which has cgroup v1 as the default. Is it mandatory to have cgroup v2? If I'm going to use cgroup v1, there's no need to change SystemdCgroup in config.toml — am I right?
I hate AppArmor so much
I know of so many cases where, rather than learning it, people disable AppArmor or SELinux. I was guilty of it for so many years! But security is important, and both of these are a link in the chain :-)
@@Drewbernetes I've actually never seen anyone use AppArmor; it's the exam that brought me to it. Maybe because we use cloud-managed clusters it's not so important? I really have no idea.
@@po6577 Honestly I think it's because people duck around security more than anything. With the amount of high profile breaches we've seen in the recent past, you'd think more people would be on top of it and practicing security-by-design, but the reality is that it is an afterthought. Even in the managed services, this is possible. Amazon, for example, have documented how to implement it (or use a 3rd party approach) to achieve this.
Wow, thanks! Can you add one more topic? Audit logs, and sending logs to OpenSearch using Filebeat OSS.
I certainly can! I knew I'd missed something: Audit Logs! Way back in my configuring KubeADM cluster videos, I *think* I mentioned setting up Audit Logs later, and forgot to make a note to actually do it :facepalm:. I'll do a video on enabling Audit Logs in a couple of weeks' time so that we can actually make use of them.

As for the OpenSearch/Filebeat request: I can do this, but it won't be part of this series I'm afraid; it'll arrive in the "Kubernetes Next Steps" series I'll start after this one is done. I wanted this one to be purely Kubernetes-native tools, with a view to expanding on additional tools like monitoring, shipping logs, CI/CD and more in a later series. I was going to use Loki for log viewing instead of OpenSearch, due to its easy integration into Grafana (it's made by the same people) and because I have much more familiarity with it. However, I'm no stranger to Elasticsearch, and let's face it, they're one and the same!
Please do OIDC and federation, please!
Hi! I shall be doing, soon™. I want to get the security sections done, and then I'll be moving onto admission controllers and probes. Once those are done, OIDC is next — maybe 5 or 6 videos away. I'll likely be using something like Keycloak for the OIDC provider.

As for federation, I'm still deciding what path to go down. When I started making my list of videos for this year at the back end of 2022, I intended to do a video on kubefed to cover federation. The problem is that the project was archived in April 2023, and with it no longer being maintained, it didn't seem right for me to do a video on it. More here: groups.google.com/g/kubernetes-sig-multicluster/c/lciAVj-_ShE?pli=1

I'm looking into alternatives, the most viable of which seems to be Karmada: karmada.io/docs/ This isn't a kubernetes-sigs project like kubefed, but it is inspired by the federation and kubefed projects. It is also part of the CNCF sandbox right now, which gives me hope that it could be a good alternative. All that being said, I'll need time to figure out how it works before putting anything together. But I will be doing a video on federation asap.
@@Drewbernetes My colleague used kubelogin with Keycloak (OIDC). He can log in in two ways: a browser redirect, or directly providing the password on the command line. He now wants to use federation without the browser redirect — just log in to k8s with username and password as arguments. At the moment, from Linux, you get a message that you need to be redirected to a browser. Do you think this is possible? He wants to use an AD account for automation, not a service token, as tokens are not bound to a person and can be shared. Great content. Soon I will do the CKS :)
Sorry, RUclips doesn't tell me about replies so I seem to miss them 🤦♂️. With regards to the redirect, unfortunately some OAuth2/OIDC providers will require it, as it's just the way the authorization code and access token exchange happens. I use Authentik to authenticate with Vault and get the same experience.
“It was me!” 😂😂
Wishing you great health, and I hope everything is okay with you. Again, great video Drew! It's a privilege learning from you.
Hi @juidas! Thanks very much! Yes, I'm fine thankfully. It took a while to get the results, but it turns out it was nothing to be concerned about in the end. They were just being over-cautious to ensure there was nothing wrong with my lungs!
@@Drewbernetes great to hear that! Wishing you a long healthy life filled with happiness
Hey Drew, you are the man! You have succeeded where others have failed... many thanks for the great tutorials; the others are all pretenders. Cleanest Kubernetes build to date, and I've played with them all (K3S, Kubespray, Rancher, etc.) for months trying to find the best solution for us. Cheers
Thanks very much! I appreciate the comment, and it's nice to know people are finding it useful. This series was always about helping people understand the basics and ensuring they have the skills needed for passing the exam (if they choose to take it). The more in-depth stuff will come later :-) For example, I'll be expanding on these with things like Cluster API (CAPI) and maybe K3S (though I'm undecided on that) at some point.

However, I have always been a firm believer that it's best to know what happens under the hood before going in with the tools that automate the building and maintaining of clusters, because when things inevitably go wrong, it's good to know how it all fits together at the ground level. I did the same when starting out too. I was looking at Juju, Rancher, Kubespray and all sorts, and I just couldn't get my head around the infrastructure. In the end I dropped it all and did it the hard way (via systemd services), then moved onto KubeADM. I now use CAPI day-to-day and feel it's one of the best tools for managing multiple clusters. If I were only running one cluster, though, I might not bother with it and would stick with KubeADM when running clusters outside of a managed cloud.
CAPI looks interesting, although I'm keen to have a deep dive into the Gateway API. I'm also keen to know how to use the Cinder plugin with a Ceph cluster (and explore whether it's possible to operate it on a separate network interface). I've implemented Longhorn for now and have also had a look at Rook & OpenEBS, but I think they're all a little noisy to embed in a cluster as SDS, and I feel this stuff should be external to the nodes within a cluster (never been a fan of iSCSI). Anyway, I have subscribed to your channel and look forward to future tutorials.
@@paulfx5019 I plan on diving into the Gateway API in this tutorial series later down the line, once I've had more time with it, as I think it's going to be a huge step forward with regards to managing ingress traffic. As for the Cinder plugin, I use that in my job and it's really useful. It also has great support when using the Snapshot Controller, so that backups (or snapshots) of Persistent Volumes can be taken for disaster recovery. Thanks for the sub, great to have you on board!
Hey Drew, a question: I'm thinking about implementing Kubernetes in a medium-scale on-premise project (10-100 physical nodes). What Kubernetes technologies do you recommend implementing it with? I'm torn between k3s, full k8s, etc.
Hi! So there are a couple of options here, and what's best for you depends on your underlying infrastructure. For example, how would you be deploying your instances? I.e. are they bare metal, OpenStack, something else? If you're using OpenStack, then I would look into the CAPI/CAPO (Cluster API and Cluster API Provider OpenStack) options, as this makes managing clusters rather easy on the whole.

I haven't played with K3S yet as I've not had the chance, but I intend to research and test it soon enough. My manager absolutely loves it (and used to work for Rancher), so I think either one of those is a good place to start. I wouldn't recommend manually installing clusters via KubeADM, to be honest. It's good to know about it and how it functions, but there are tools that wrap around it to make life easier now (which I'll get into in much later videos). CAPI/CAPO does have some really minor limitations around how much control over KubeADM you get, such as not being able to hide the control plane (which is supported in KubeADM but not in CAPI). I believe K3S does support this, so if that matters to you, it's worth noting.

If you do decide to go down the OpenStack/CAPI/CPO route, then take a look at the kubernetes-sigs/image-builder project on GitHub for building your Kubernetes images. I've recently contributed a feature to enable the building of images directly in OpenStack, which should help you on your way.

I'd personally recommend looking at OpenStack for your instance management. It's stable and has good support within Kubernetes. Whatever you choose, though, make sure it has good, maintained support for how LoadBalancer Services are created, a supported and actively developed CSI, and the other core "cloud-like" components, so that you're not having to build your own workarounds into the mix. I hope that helps get you started, and if you have any other questions, feel free to fire them my way.
@@Drewbernetes Thank you very much. I was talking to my team, and as it stands it's bare metal; the idea is to run several containers with nginx services for the front end of the applications and WildFly for the back end. So we're deciding what to use (k3s, kops, Kubespray or something like that), and whether to use containerd or Docker. What would you recommend?
@@izidr0x770 No problem! So based on what you've said, you might find K3S to be the better option. KOPS only supports AWS and GCE officially, and if I recall correctly, Kubespray is a bunch of Ansible scripts and requires an orchestrator to manage the nodes. I will say that if you're not using MAAS, OpenStack or anything else to manage the nodes, then scaling the cluster will be a manual task, which kind of goes against the flow of how you should be using Kubernetes. I'm not sure what your burst traffic would look like, but it's something to be aware of — which is why I recommended OpenStack as something to orchestrate the nodes. You can orchestrate bare metal nodes with OpenStack, by the way; it's not just VMs ;-) kolla-ansible for OpenStack is a great place to start. K3S supports both HA and single-node configurations, so it's worth seeing which would best suit your needs. Remember, Cluster API is an option too if you're using OpenStack, vSphere or any other orchestrator for your nodes.

With regards to the container runtime, containerd is likely the best way to go, as the dockershim was deprecated and removed a few releases back. So if you're already considering containerd, go that way. All your Dockerfiles/images etc. will work with it, as Docker actually developed containerd and donated it to the CNCF! www.docker.com/blog/what-is-containerd-runtime/

If you are going to go bare metal and you know your app's burst traffic won't require the scaling features, then that's fine, but it's good to be aware of it. Also, install MetalLB or Kube-VIP if you need any sort of external access to your app! I have mentioned Kube-VIP already in my videos and will touch on MetalLB at some point in the near future.

And my final thought is this: as much as I'm an advocate of Kubernetes, it's worth looking into whether Kubernetes is right for your project. Consider the features it provides vs the trade-off of managing the cluster itself. Sorry for the second essay! 😀
@@Drewbernetes Hi Drew, don't worry, I like your long answers; I know you take your time to write them and I appreciate it. I don't think I gave you enough context for you to really recommend something, and to tell the truth my knowledge of Kubernetes and servers is basic; in this project I'm an assistant and I'm still learning.

The project is being done at an educational institution that wants to migrate their servers to Kubernetes. As far as I understand and have been told, the current server has different sections: some open to the public, which would be the production part, and other sections only available to developers, which would be the pre-development, development and pre-production sections, apart from the databases that hold the information for all the students, teachers, etc. Those of us involved in this task have been doing some research, but we still haven't defined exactly what to use; the idea is to build an HA cluster and manage the database externally. Tomorrow they are going to explain to me in a little more depth how everything works, so right now I wanted to do some research and, by the way, take advantage of your knowledge so I can contribute adequately to the decision.

And now that I think about it, I'm not entirely clear on the concept of bare metal. In this case, as far as I have seen, the project is going to use virtual machines that run on physical machines owned by the institution, so they would not be renting machines in the cloud. From what I was reading about the concepts just now, I think that in this case it would not be bare metal, or so I believe.
@@izidr0x770 Aaaah, the context helps! So yeah, I suspect what will happen is they'll have their own blade servers (the bare metal nodes) that will run a hypervisor of some sort (OpenStack, vSphere et al.), which will be responsible for deploying the VMs on which the Kubernetes clusters will be set up. That's a totally legit and sensible way of setting things up. In that case, I'd recommend looking at Cluster API (CAPI), and either Cluster API Provider OpenStack (CAPO) if using OpenStack or Cluster API Provider vSphere (CAPV) if using vSphere. This gives you tight integration with the hypervisor and enables things like auto-scaling, load balancing and more. It allows your Kubernetes cluster to behave as if it were in a cloud provider like AWS, GCP etc.

With regards to the public/private setup, this again is a sensible approach. Having a Production and a Staging cluster allows the following to happen. You'd have your code repository, such as GitHub, GitLab etc., which hosts the code. Then you can do something akin to the following:
1. Create a Development/Release branch off of main.
2. Each developer then works in their own branch and, when ready, merges it back into the Development/Release branch.
3. The staging cluster targets the Release branch using GitOps tools such as ArgoCD or Flux, meaning it's kept in sync with what is in the Development branch.
4. Once testing is complete and you're happy to promote to production, merge your Development branch into main and create a tag/release.
5. Update your production cluster to target the new release.
6. Sync Development with main and start the cycle again.

I think you've got an interesting path ahead of you, and you'll learn a lot playing with it in a real-world scenario. Nothing beats hands-on work for learning things like this. I wish you all the best of luck and hope you gain a lot from it.
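The git side of a branch-and-promote flow like the one described above could be sketched roughly like this. The repo, branch and tag names are invented examples, and it runs in a throwaway directory so it's safe to try:

```shell
# Sketch of a release-branch flow (names are examples). Runs in a
# temporary repo; the GitOps sync steps are out of scope here.
set -e
d=$(mktemp -d) && cd "$d"
git init -q
git checkout -q -b main
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m "initial"

git checkout -q -b release            # development/release branch off main
git checkout -q -b feature/login      # a developer's own branch
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m "add login"

git checkout -q release
git merge -q feature/login            # merge the feature back into release

git checkout -q main
git merge -q release                  # promote to production...
git tag v1.0.0                        # ...and cut the release tag

git tag --list
```

A staging cluster's GitOps tool would track `release`, while production would track the `v1.0.0` tag, matching steps 3-5 above.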