Kubernetes With External ETCD | Using kubeadm to create HA Kubernetes with external ETCD nodes

  • Published: 24 Sep 2024

Comments • 30

  • @jairuschristensen2888
    @jairuschristensen2888 7 months ago

    These installation videos are incredible! I've spent the last two days following along with your three installation videos with my own cluster (and scripting the entire thing). There are no tricks, just you doing it with us, and there's something special about that. Like a humble class TA. Great job, keep it up!

    • @Drewbernetes
      @Drewbernetes  7 months ago

      Thanks so much for the kind words!
      I wanted this series to be exactly that. I wanted to presume no prior knowledge and just take people through the whole process so I'm glad that it's come across that way!

  • @Intaberna986
    @Intaberna986 9 days ago

    10:35 Mate, I was going bonkers for a week trying to set up an HA cluster until I came across your channel. You can edit as you see fit; I've gone through lots of videos and this is perfect.

    • @Drewbernetes
      @Drewbernetes  8 days ago +1

      Haha nice one! Yeah, it was one of those scenarios where I considered chopping it out or recording it again, but in the end thought: "Naaa, leave it in". It's good for people to see errors really - we all make them, and anyone who pretends they are flawless on YouTube... well, they're not :-D
      Glad it helped though!

  • @madhavamelkote4554
    @madhavamelkote4554 2 months ago

    Brilliant video, absolutely perfect... subscribed!!!

  • @zaheerhussain5311
    @zaheerhussain5311 7 months ago

    Hi,
    have you made a video on setting up Kubernetes with an external etcd cluster with a VIP?

    • @Drewbernetes
      @Drewbernetes  7 months ago

      Hi,
      This video does use a VIP for the Kubernetes cluster via KubeVIP and an external ETCD cluster. Do you mean using a VIP for the external ETCD cluster itself? If so, I'm not sure that's possible (or recommended) to be honest, due to how ETCD is intended to be used.
      KubeVIP works just as a real-world LoadBalancer would, in that it provides a single IP through which you can hit any of the API server endpoints.
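      As a rough sketch of what that looks like when initialising the first control plane against the VIP (the address, port and file name below are placeholders, not the exact values from the video):
      ```bash
      # Point the cluster's API endpoint at the Kube-VIP address rather than a single node.
      sudo kubeadm init \
        --control-plane-endpoint "192.168.0.100:6443" \
        --upload-certs
      # With an external ETCD cluster, the etcd endpoints and client certificates are
      # normally supplied via a kubeadm config file instead, e.g.:
      #   sudo kubeadm init --config kubeadm-config.yaml --upload-certs
      ```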

  • @izidr0x770
    @izidr0x770 1 year ago

    Good one! Hey Drew, a question: I'm thinking about implementing Kubernetes for a medium-scale on-premise project (10-100 physical nodes). What Kubernetes technologies do you recommend implementing it with? I'm deciding between k3s and k8s, etc.

    • @Drewbernetes
      @Drewbernetes  1 year ago +1

      Hi!
      So there are a couple of options here, and it does depend on your underlying infrastructure as to what would be best for you. For example, how would you be deploying your instances? I.e. are they bare metal, OpenStack? Something else?
      If you're using OpenStack, for example, then I would look into the CAPI/CAPO (Cluster API and Cluster API Provider OpenStack) options, as this makes managing clusters rather easy on the whole.
      I haven't played with K3S yet as I've not had the chance to date, but I intend to research and test it soon enough. My manager absolutely loves it (and used to work for Rancher), but I think either one of those is a good place to start. I wouldn't recommend manually installing clusters via KubeADM to be honest. It's good to know about it and how it functions, but there are tools that wrap around it to make life easier now (which I'll get into in much later videos).
      CAPI/CAPO does have some really minor limitations around how much control over KubeADM you get, such as not being able to hide the control plane (which is supported in KubeADM but not in CAPI). I believe K3S does support this, so if this is something that matters to you it's worth noting.
      If you do decide to go down the OpenStack/CAPI/CAPO route, then take a look at the kubernetes-sigs/image-builder project on GitHub for building your Kubernetes images - I've recently contributed a feature to enable the building of images directly in OpenStack, which should help you on your way there.
      I'd personally recommend looking at OpenStack for your instance management. It's stable and has good support within Kubernetes. Whatever you choose though, make sure it has good, maintained support around how the LoadBalancer Services are created, a supported and developed CSI, and other core "cloud-like" components so that you're not having to build your own workarounds into the mix.
      I hope that helps get you started, and for any other questions feel free to fire them my way.
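      For a feel of the CAPI/CAPO route mentioned above, this is roughly what the bootstrapping looks like (version pins and the provider-specific template variables are omitted, so treat it as a sketch rather than the exact commands from the series):
      ```bash
      # Install Cluster API plus the OpenStack provider (CAPO) into an existing management cluster.
      clusterctl init --infrastructure openstack

      # Workload clusters are then rendered from a template (requires provider env vars) and applied.
      clusterctl generate cluster demo-cluster --kubernetes-version v1.30.0 > demo-cluster.yaml
      kubectl apply -f demo-cluster.yaml
      ```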

    • @izidr0x770
      @izidr0x770 1 year ago

      @@Drewbernetes Thank you very much. I was talking to my team and it is bare metal as such, and the idea is to implement several containers with nginx services for the front end of the applications and WildFly for the back end. So we are deciding what to use - k3s, kops, kubespray or something like that - and whether to use containerd or Docker. I don't know what you can recommend.

    • @Drewbernetes
      @Drewbernetes  1 year ago +1

      @@izidr0x770
      No problem!
      So based on what you've said, you might find K3S to be the better option. KOPS only supports AWS and GCE officially and, if I recall correctly, KubeSpray is a bunch of Ansible scripts and requires an orchestrator to manage the nodes.
      I will say, if you're not using MAAS, OpenStack or anything else to manage the nodes, then scaling the cluster will be a manual task, which kind of goes against the flow of how you should be using Kubernetes. I'm not sure what your burst would look like but it's something to be aware of - which is why I recommended OpenStack as something to orchestrate the nodes. You can orchestrate bare metal nodes with OpenStack by the way, it's not just VMs ;-) kolla-ansible for OpenStack is a great place to start.
      K3S does support an HA or single-node configuration, so it's worth seeing which would best suit your needs. Remember ClusterAPI is an option too if you're using OpenStack, vSphere or any other orchestrator for your nodes.
      With regards to the container runtime, containerd is likely the best way to go as the dockershim was deprecated and removed a few releases back. So if you're already considering containerd, go that way. All the Dockerfiles/images etc. will work with it, as Docker actually developed containerd and donated it to the CNCF!
      www.docker.com/blog/what-is-containerd-runtime/
      If you are going to just go bare metal and you know your burst traffic for your app won't require the scaling features, then that's fine, but it's good to be aware of it. Also, install MetalLB or Kube-VIP if you need any sort of external access to your app! I have mentioned Kube-VIP already in my videos and will touch on MetalLB at some point in the near future.
      And my final thought on it is this... as much as I'm an advocate of Kubernetes, I will say that it's worth looking into whether Kubernetes is right for your project. Consider the features it provides vs the trade-off of managing the cluster itself.
      Sorry for the second essay! 😀
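      For reference, a hedged sketch of what an HA K3s install can look like with the embedded datastore (the node IP and token below are placeholders; check the K3s docs for your version):
      ```bash
      # First server bootstraps the embedded etcd cluster.
      curl -sfL https://get.k3s.io | sh -s - server --cluster-init

      # Additional servers join the first one (placeholder IP and token).
      curl -sfL https://get.k3s.io | K3S_TOKEN=<token> sh -s - server \
        --server https://192.168.0.10:6443
      ```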

    • @izidr0x770
      @izidr0x770 1 year ago

      @@Drewbernetes Hi Drew, don't worry, I like your long answers, I know you take your time to make them and I appreciate it.
      I don't think I gave you enough context for you to really recommend something, and to tell the truth my knowledge about Kubernetes and servers is basic; in the project I'm an assistant and I'm still learning.
      The project is being done in an educational institution that wants to migrate their servers to Kubernetes. As far as I understand and have been explained to me, the current server has different sections: some open to the public, which would be the production part, and other sections that are only available to developers, which would be the pre-development, development and pre-production sections, apart from the databases that handle the information of all students, teachers, etc. Those of us who are involved in this task have been doing some research, but we still haven't defined exactly what to use, and the idea is to make an HA cluster and manage the database externally.
      Tomorrow they are going to explain to me a little more in depth how everything is going, and right now I was thinking of doing some research and, by the way, taking advantage of your knowledge to contribute adequately to the decision.
      And now that I think about it, I'm not entirely clear on the concept of bare metal. In this case, as far as I have seen, the project is going to use virtual machines that run on physical machines owned by the institution, so they would not be hiring machines in the cloud, and from what I was researching, I think that in this case it would not be bare metal - or so I think.

    • @Drewbernetes
      @Drewbernetes  1 year ago +1

      @@izidr0x770 aaaah the context helps!
      So yeah, I suspect what will happen is they'll have their own blade servers, the bare metal nodes, that will run a hypervisor of some sort (OpenStack, vSphere et al.) which will be responsible for deploying the VMs on which the Kubernetes clusters will be set up. That's a totally legit and sensible way of setting things up.
      I'd recommend in that case to look at Cluster API (CAPI) and either Cluster API Provider OpenStack (CAPO) if using OpenStack or Cluster API Provider vSphere (CAPV) if using vSphere. This allows you to have a tight integration with the hypervisor and enables things like auto-scaling, load balancing and more. It allows your Kubernetes cluster to behave as though it is in a cloud provider like AWS, GCP etc.
      With regards to the public/private setup, this again is a sensible approach. Having a Production and Staging cluster allows the following to happen.
      You'd have your code repository, such as GitHub, GitLab etc., which hosts the code. Then you can do something akin to the following:
      1. Create a Development/Release branch off of main.
      2. Each developer can then work in their own branch and, when ready, merge it back into the Development/Release branch.
      3. The staging cluster can target the Development/Release branch using GitOps tools such as ArgoCD or Flux, meaning it's kept in sync with what is in that branch.
      4. Once testing is complete and you're happy to promote to production, merge your Development branch into main and create a tag/release.
      5. Update your production cluster to target the new release.
      6. Sync Development with main and start the cycle again (there's a rough git sketch of this flow at the end of this reply).
      I think you've got an interesting path ahead of you and you'll learn a lot playing with it in a real-world scenario. Nothing beats hands-on work for learning things like this. I wish you all the best of luck with this and hope you gain a lot from it.
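      A rough git sketch of the branch flow in the numbered steps above (branch, tag and remote names are just examples):
      ```bash
      git checkout -b development main              # 1. Development branch off main
      git checkout -b feature/my-change development # 2. per-developer branch
      # ...commit work, then merge it back...
      git checkout development && git merge feature/my-change
      # 3./4. Staging tracks 'development' via ArgoCD/Flux; when happy, promote:
      git checkout main && git merge development
      git tag v1.0.0 && git push origin main --tags # 5. Production targets the new tag
      git checkout development && git merge main    # 6. Re-sync and start again
      ```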

  • @RajasekharSiddela
    @RajasekharSiddela 9 months ago

    @Drewbernetes hey Drew, when I test for cluster health I'm getting a tcp dial connection refused error. How do I solve this?

    • @Drewbernetes
      @Drewbernetes  9 months ago

      Hi!
      If you're seeing something like Get "localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused then I would start by double checking your kubeconfig to ensure it's configured correctly. It should be pointing to the IP address or DNS (if you have configured one) for KubeVIP. You can target the kubeconfig directly by setting `KUBECONFIG=/path/to/config` or by adding the flag to your kubectl command `--kubeconfig=/path/to/config`.
      If you're seeing that error above but with the IP or DNS name you've configured then it could be a firewall issue or that the KubeVIP Pod isn't running.
      In this case, you can rule out KubeVIP first by accessing the Control Plane you initialised first and running the same command whilst using the admin config located at `/etc/kubernetes/admin.conf`. If this works, then it's the firewall so you'll need to configure the firewall either on your nodes or the network to allow the appropriate traffic.
      If you've followed along with what I've done on Ubuntu in VMs, this should work by default.
      If it's not the firewall and you believe it to be KubeVIP then you can check via `crictl ps` that it's running. It may need reconfiguring and the manifest regenerating.
      I hope that helps!
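      A few of those checks as commands (the config path is the kubeadm default used in this series; the server value will be your own VIP or DNS name):
      ```bash
      # Confirm which endpoint your kubeconfig is actually pointing at.
      kubectl config view --minify | grep server

      # Bypass Kube-VIP by using the admin config on the control plane you initialised first.
      sudo kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes

      # Check that the Kube-VIP static pod is running on that node.
      sudo crictl ps | grep kube-vip
      ```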

    • @RajasekharSiddela
      @RajasekharSiddela 9 months ago

      @Drewbernetes got it, I missed the VIP configuration. Now I have seen all 4 videos and am going to do it from scratch. By the way, the way you're explaining the concepts is stupendous.

    • @RajasekharSiddela
      @RajasekharSiddela 9 months ago

      I have one question:
      For creating Kube-VIP do we need to have a separate node, or can we assign any IP address within our interface on the main control plane?

    • @Drewbernetes
      @Drewbernetes  9 months ago

      @@RajasekharSiddela The way Kube VIP works is it makes use of an IP address that exists on your main network. It doesn't require a node of its own as it runs in a pod within your cluster. By main network I mean the same one from which your nodes get an IP.
      For example, if your control planes and worker nodes have an IP address of 192.168.0.x then the IP KubeVIP uses should be on that same network (192.168.0.0/24).
      Just make sure it's not an IP address that is in use by something else.
      Hope that helps!
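      A couple of quick sanity checks before picking an address (192.168.0.100 and eth0 below are examples, not values from the video, and arping flags vary between implementations):
      ```bash
      # Confirm which network your nodes are on.
      ip -4 addr show

      # Make sure the candidate VIP isn't already answering.
      ping -c 2 192.168.0.100

      # Optionally check at layer 2 as well (arping may need installing first).
      sudo arping -c 2 -I eth0 192.168.0.100
      ```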

    • @RajasekharSiddela
      @RajasekharSiddela 9 months ago

      @Drewbernetes thanks for your quick response.
      I've got one more doubt:
      I'm using RHEL 7.7 VMs for cluster creation, which has cgroup v1 as the default.
      Is it mandatory to have cgroup v2?
      If I'm going to use cgroup v1, then there's no need to change SystemdCgroup in config.toml - am I right?
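      For anyone following along, the setting being asked about lives in containerd's config file; a quick way to check it (containerd 1.x layout, and the path may differ on RHEL):
      ```bash
      # Show whether the systemd cgroup driver is enabled for the runc runtime.
      grep -n -A2 'containerd.runtimes.runc.options' /etc/containerd/config.toml
      # The relevant key looks like:
      #   [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      #     SystemdCgroup = true
      ```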