Cilium Kubernetes CNI Provider, Part 3: Cluster Mesh

  • Published: 15 Oct 2024

Comments • 39

  • @bijanpartovi9768
    @bijanpartovi9768 2 years ago +2

    Great presentation and thank you for sharing!
    I have a question: in an inter-cluster load balancing situation, is it possible to make client PODs prefer the service in the local cluster first?

    • @TheLearningChannel-Tech
      @TheLearningChannel-Tech  2 years ago +1

      Thank you very much Bijan!
      This is a great question, and I will pin it. I meant to talk about it in the video when I discussed opportunities and needed features, but forgot! Currently, Cilium does not have an "affinity" setting to make PODs prefer a local cluster or region. I believe this is in the works but was not available at the time of this recording. If all your clusters are in the same data center, this is not a big deal, but if, say, one cluster is in New York and the other one is in LA, then the response times become significant. You only want a POD in New York to hit a service in LA if the New York service is down.
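      Newer Cilium releases do add such an affinity setting through annotations on a global service. A minimal sketch, assuming a hypothetical service named my-service that is shared across the meshed clusters:
      # Mark the service global, then prefer local-cluster backends, falling back to remote ones
      kubectl annotate service my-service service.cilium.io/global="true"
      kubectl annotate service my-service service.cilium.io/affinity="local"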

  • @eBPFCilium
    @eBPFCilium 2 years ago +4

    Thanks for covering Cilium again; we will add it to our blog and newsletter.

    • @TheLearningChannel-Tech
      @TheLearningChannel-Tech  2 years ago +1

      Hi, I have a tutorial video describing Cilium's IP routing modes (Encapsulated and Direct (L2 and L3)) in case it would be useful for your audience. Thanks!
      ruclips.net/video/j2aox7K-7wU/видео.html

    • @magrag1987
      @magrag1987 8 months ago

      @TheLearningChannel-Tech ❤

  • @rouabahoussama
    @rouabahoussama 2 years ago

    I can't believe this is free.
    Really, this series is a great job.
    Thank you so much for sharing with us.
    I don't know who you are, but I want to say: thank you so so so much.

  • @ramprasad_v
    @ramprasad_v 2 years ago

    Thank you for the great explanation.

  • @cajgazachar
    @cajgazachar 2 years ago

    Looking forward to seeing a video about Cilium Service Mesh (it is GA with the latest release).

  • @lucian1094
    @lucian1094 1 month ago

    Very nice videos, sir, thank you for the content!
    I have a question: when we renew the Kubernetes certificates, do we need to reconnect the clusters?

    • @TheLearningChannel-Tech
      @TheLearningChannel-Tech  1 month ago

      Hi, generally, if the certificates change, it may be necessary to re-mesh the clusters. If you run into communication issues between the clusters, you should re-mesh them.
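      If you do need to re-mesh, a minimal sketch with the Cilium CLI, assuming the same $CLUSTER1/$CLUSTER2 contexts used in the video:
      # Tear down and re-establish the mesh link, then wait for both sides to report ready
      cilium clustermesh disconnect --context $CLUSTER1 --destination-context $CLUSTER2
      cilium clustermesh connect --context $CLUSTER1 --destination-context $CLUSTER2
      cilium clustermesh status --context $CLUSTER1 --wait
      cilium clustermesh status --context $CLUSTER2 --wait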

  • @buacomgiadinh1
    @buacomgiadinh1 2 years ago

    awesome video :)

  • @gauravpatel2005
    @gauravpatel2005 1 year ago

    @TheLearningChannel I find your videos super valuable and in-depth. Could you create one for CoreDNS in K8s?

    • @TheLearningChannel-Tech
      @TheLearningChannel-Tech  1 year ago

      Hi, I covered CoreDNS in my Kubernetes services video: ruclips.net/video/BZk2HUKsxAQ/видео.html

  • @hemantbali5076
    @hemantbali5076 1 year ago

    Hi @TheLearningChannel, I have a question: why didn't you use the Cilium CLI to install cluster 1? Instead, you used Helm.
    Second question: why didn't you use --kube-proxy-replacement=strict while installing cluster 1, even though on cluster 2 you used the Cilium CLI with --kube-proxy-replacement=strict?

    • @TheLearningChannel-Tech
      @TheLearningChannel-Tech  1 year ago

      Hi, as explained in the video, at the time of recording, the Helm and "cilium install" methods of installation each had a feature that was missing in the other: Helm provided the option to specify a CIDR range, and "cilium install" had the feature to share the Hubble certificate between the two clusters. Thus the need to use two methods of installation to set up the clusters.
      --kube-proxy-replacement is an optional flag that specifies not to use iptables and instead use eBPF for managing routes. This flag doesn't change the functionality of the cluster.
      Hope this helps.
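      For reference, the Helm side of that setup looks roughly like this; a sketch only, with a placeholder CIDR rather than the exact values from the video:
      helm install cilium cilium/cilium --namespace kube-system \
        --set cluster.name=cluster-1 \
        --set cluster.id=1 \
        --set ipam.mode=cluster-pool \
        --set ipam.operator.clusterPoolIPv4PodCIDRList='{10.1.0.0/16}' \
        --set kubeProxyReplacement=strict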

    • @hemantbali5076
      @hemantbali5076 1 year ago

      Thanks @TheLearningChannel, could you please also confirm how we can test end to end that we are using eBPF instead of iptables? Any doc link or video would be helpful.

    • @TheLearningChannel-Tech
      @TheLearningChannel-Tech  1 year ago

      @hemantbali5076 When initializing each cluster with kubeadm, pass in this switch: "sudo kubeadm init --skip-phases=addon/kube-proxy". Then, when installing Cilium on each cluster, set --kube-proxy-replacement to strict.
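      To verify it end to end, you can ask the agent itself; a sketch assuming the default kube-system installation:
      # Each Cilium agent reports whether it has taken over kube-proxy's job
      kubectl -n kube-system exec ds/cilium -- cilium status | grep KubeProxyReplacement
      # On a node, with kube-proxy skipped, no KUBE-SVC service chains should exist in iptables
      iptables-save | grep -c 'KUBE-SVC'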

  • @jamilakassem8170
    @jamilakassem8170 1 year ago

    What if we want to set up a third cluster? What would that look like? Also, can I attach a new cluster after the installation of Cilium?

    • @TheLearningChannel-Tech
      @TheLearningChannel-Tech  1 year ago

      The process of adding additional clusters to the mesh is the same. In this demo, I showed how to set up each cluster and how to mesh them, but that doesn't mean you have to set up all your clusters at the same time. You can join additional clusters later; just follow the same instructions, especially importing the "cilium-ca" secret.
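      Concretely, joining a hypothetical third cluster would follow the same pattern as cluster 2 in the video; the names, IDs, and contexts here are placeholders:
      # Install with a unique name/ID, inheriting the CA from an existing member
      cilium install --context $CLUSTER3 --cluster-name=cluster-3 --cluster-id=3 --inherit-ca $CLUSTER1
      cilium clustermesh enable --context $CLUSTER3 --service-type NodePort
      # Connect the new cluster to each existing member of the mesh
      cilium clustermesh connect --context $CLUSTER1 --destination-context $CLUSTER3
      cilium clustermesh connect --context $CLUSTER2 --destination-context $CLUSTER3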

    • @jamilakassem8170
      @jamilakassem8170 1 year ago

      @TheLearningChannel-Tech Thank you! You are a lifesaver. So basically, I will need to install Cilium with the Cilium CLI and --inherit-ca, as you did on the second cluster.

  • @shuc1935
    @shuc1935 2 years ago

    The Cilium CLI offers an option to set non-overlapping CIDR ranges via the cluster-pool-ipv4-cidr key, so the cluster mesh setup, IMHO, can be completed using the Cilium CLI without having to switch between Helm and the CLI.

    • @TheLearningChannel-Tech
      @TheLearningChannel-Tech  2 years ago

      Thanks for the tip, for the benefit of the audience, could you provide a sample script that has worked for you? Thanks.

    • @shuc1935
      @shuc1935 2 years ago

      # Cluster 1
      cilium install --cluster-name=cluster-1 --cluster-id=1 --context $CLUSTER1 --kube-proxy-replacement=strict
      cilium config set cluster-pool-ipv4-cidr 172.0.0.0/16
      # Cluster 2
      cilium install --context $CLUSTER2 --cluster-name=cluster-2 --cluster-id=2 --inherit-ca $CLUSTER1 --kube-proxy-replacement=strict
      cilium config set cluster-pool-ipv4-cidr 172.0.0.0/16

    • @shuc1935
      @shuc1935 2 years ago

      There is a typo in the second cluster-specific command; please use a different, non-overlapping CIDR range.

    • @shuc1935
      @shuc1935 2 years ago

      Corrected
      # Cluster 1
      cilium install --cluster-name=cluster-1 --cluster-id=1 --context $CLUSTER1 --kube-proxy-replacement=strict
      cilium config set cluster-pool-ipv4-cidr 172.0.0.0/16
      # Cluster 2
      cilium install --context $CLUSTER2 --cluster-name=cluster-2 --cluster-id=2 --inherit-ca $CLUSTER1 --kube-proxy-replacement=strict
      cilium config set cluster-pool-ipv4-cidr 10.0.0.0/16

    • @TheLearningChannel-Tech
      @TheLearningChannel-Tech  2 years ago

      Thanks, I think this is a new feature in later versions of Cilium; in the earlier version that I used while recording this video, setting the IP pool using "cilium install" was not available. I opened an issue for this on Cilium's Slack channel. If they've added this feature, then great, and thanks for letting us know!
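      One caveat with the CLI-only route, as a sketch: "cilium config set" writes to the cilium-config ConfigMap, so if the agents don't pick up the new pool on their own, restarting the operator and the agents should apply it:
      kubectl -n kube-system rollout restart deployment/cilium-operator
      kubectl -n kube-system rollout restart daemonset/cilium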

  • @jonassteinberg3779
    @jonassteinberg3779 7 months ago

    I'd imagine inter-cluster load balancing is a feature mainly relevant to severely scaled environments? In my experience, 99% of shops have a small, medium, or large cluster per environment, so there really wouldn't be a need for inter-cluster load balancing. I have seen one dev environment made up of hundreds of very small clusters, but in this case too there is no need for inter-cluster load balancing. The cutover case also does not make sense to me: running concurrent clusters is going to be extremely expensive; then again, if the clusters are small, then I doubt inter-cluster load balancing would really matter? I could see a CI/CD pipeline that's spinning up clusters, or blue-green clusters in the lower environments, I guess, but I dunno... So is this a solution looking for a problem, or what's the practical use case for this? Again, I understand the feature, I'm just questioning its relevance. Stunning video per usual!

    • @TheLearningChannel-Tech
      @TheLearningChannel-Tech  7 months ago

      Hi,
      The main idea behind cluster mesh is for very large organizations with geographically dispersed clients who want to improve latency by serving clients from the data centers closer to them. Imagine a multinational company with clients in Asia, Europe, and North America: having all the infrastructure concentrated in the US will create latency for clients in the other regions. The other benefit is fault tolerance; if one region goes down, the other regions can pick up the slack.
      So those are the main aspirations behind a cluster mesh. The load-balancing part requires intelligent load balancing, i.e., routing traffic from the clients in a region to the services in the same region. The fault-tolerance part requires that a failed cluster automatically fail over to other healthy clusters. At the time of recording that video, Cilium hadn't quite provided those features yet, and I made a point of mentioning that in the conclusion section of the video. I haven't had a chance to revisit Cilium's cluster mesh to see if they have made any improvements.

    • @jonassteinberg3779
      @jonassteinberg3779 7 months ago

      @TheLearningChannel-Tech Hm... I would think cloud providers would provide geographical load balancing via DNS to individual clusters in different cloud regions 🤔

    • @TheLearningChannel-Tech
      @TheLearningChannel-Tech  7 months ago

      @@jonassteinberg3779 Yes, that is true if you host them in the cloud.