Using Rancher For Creating And Managing Kubernetes Clusters

  • Published: 9 Jan 2025

Comments • 176

  • @aliounediakhate8008
    @aliounediakhate8008 3 years ago +9

    5:08 is the most important thing in this tutorial. Thanks for such great content.

  • @DevOpsToolkit
    @DevOpsToolkit  3 years ago +8

    IMPORTANT: A new review of Rancher is now available at ruclips.net/video/JFALdhtBxR8/видео.html

    • @StephaneMoser
      @StephaneMoser 3 years ago +2

      Kubeadm + Terraform

    • @baumbaer
      @baumbaer 3 years ago +3

      Using Rancher for on-premises Kubernetes clusters. I used Kubespray before. I really like the user management and the integrated logging/monitoring options.

    • @oftheriverinthenight
      @oftheriverinthenight 3 years ago +2

      Kubespray at work.
      RKE1 at home; RKE2 has containerd, but there is no guide or option to upgrade at the moment (GitHub issue 562 on rke2). So the next installation could probably also be Kubespray.

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago +2

      @@baumbaer Rancher for on-prem is a no-brainer. It is probably the best option we have, especially in the "free department".

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago +3

      @@oftheriverinthenight That's what I understood as well. It should be available in the next release or, at least, very soon.

  • @BKearal
    @BKearal 3 years ago +11

    You might want to mention that it also offers a single point of access into multiple clusters via the Rancher proxy for kubectl. This is great for centralized access control that can even be per namespace, etc., without deploying anything special onto the clusters themselves.
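
    For reference, a minimal sketch of pointing kubectl at that proxy; the Rancher host, cluster ID, and API token below are placeholders you would copy from the Rancher UI:

    $ # Rancher proxies requests at https://<rancher-host>/k8s/clusters/<cluster-id>
    $ kubectl config set-cluster rancher-proxy \
        --server=https://rancher.example.com/k8s/clusters/c-abc12
    $ kubectl config set-credentials rancher-user --token=<rancher-api-token>
    $ kubectl config set-context rancher-proxy \
        --cluster=rancher-proxy --user=rancher-user
    $ kubectl config use-context rancher-proxy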

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago +3

      Oh yeah. I forgot to mention that one. That is indeed a very good feature.

    • @mtik000
      @mtik000 3 years ago +3

      This is why we plan on keeping "Rancher" even though our clusters have moved to EKS. Appears to be much easier to handle RBAC than deal with a bunch of IAM roles/users/etc.

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago +4

      @@mtik000 Why did you do that? Why did you mention IAM? Whenever I hear that word, I have nightmares and cannot sleep.

    • @mtik000
      @mtik000 3 years ago +1

      @@DevOpsToolkit Hah! Sorry :) I need to offload my burden to someone else.

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago +3

      @@mtik000 No worries. I'll have a gin&tonic. That usually fixes it.

  • @junejuan8561
    @junejuan8561 3 years ago +6

    Hi,
    Just to answer some of the CONS part:
    1. Rancher has k3s and RKE2, which use containerd by default, and I think they are moving away from RKE (the first version) anytime soon.
    2. The NGINX ingress controller was already enabled; just go to the default namespace > Load Balancing. For storage you have a lot of options in their Apps section, but Rancher recommends Longhorn.
    3. Again, just use k3s and RKE2.

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago +2

      2. Ingress was not there. I checked the Services and it wasn't there. Storage is there, but you need to fiddle with a bunch of options and hope that those available are supported by your provider. While I do think that you should be able to fine-tune storage, it's silly that it does not come with at least a single StorageClass set as default, as with literally any other Kubernetes distribution.
      1. and 3. I agree, but Rancher needs to make it prominent and not hidden. If you just follow what it suggests, neither RKE2 nor k3s is there, at least not today.
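
      As an illustration of the missing piece, marking an existing StorageClass as the default is a single annotation (the class name here is just an example):

      $ kubectl get storageclass
      $ kubectl patch storageclass do-block-storage \
          --patch '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'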

    • @BKearal
      @BKearal 3 years ago +2

      The Ingress controller definitely does deploy with Rancher-launched RKE clusters. I have multiple clusters deployed via Rancher that did not require an extra step for this.

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago +1

      It's possible that it doesn't with DigitalOcean. Let me double-check again. Back in 30 min...

    • @BKearal
      @BKearal 3 years ago +1

      @@DevOpsToolkit Would be interesting to know. I've just been using it with bare metal nodes so far, which as you mentioned is a great use case for it anyway.

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago

      @@BKearal For some reason, Let's Encrypt decided not to work, so I cannot create certs for the new cluster where I wanted to install Rancher (connecting to new clusters doesn't work without certs) and double-check whether Ingress is indeed installed.
      Nevertheless (until Let's Encrypt gets back to work), here's the part of the video where I confirm that Ingress is not there: 17:11.

  • @DanielRolfe
    @DanielRolfe 3 years ago +9

    For on-prem, Rancher also has RBAC with AD, which is super nice. Another Rancher project worth looking at is Longhorn distributed storage, which again is amazing if your on-prem storage isn't anything super special.

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago +3

      Longhorn is indeed a very interesting (and good) project. I'll probably make a video about it soon.

    • @AsifSaifuddinAuvipy
      @AsifSaifuddinAuvipy 1 year ago +1

      And Harvester

  • @MohamedBelgaiedHassine
    @MohamedBelgaiedHassine 3 years ago +7

    Rancher has not only RKE as a Kubernetes distribution but also k3s and RKE2, which are both NOT based on Dockershim but on containerd. Second thing: Kubernetes is deprecating Dockershim as part of the Kubernetes project, but Dockershim will continue to exist as a plugin, like some other CRI plugins. It will be maintained by Docker Inc, Mirantis, and Rancher. So, the comment about Rancher not being up to date is irrelevant.

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago +3

      As far as I know, RKE2 was not GA nor integrated with Rancher at the time I recorded that video. I'll add it to my TODO list to review it again and create a new video about Rancher.
      As for Docker in Kubernetes (and Dockershim), it's clear that there is no future for it in Kubernetes clusters. There is no good reason why anyone would use Docker as a container engine except for legacy reasons. Dockershim exists only so that people who did things that should not be done can prolong the inevitable remediation of past mistakes. Even RKE2 removed Docker.
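
      A quick, generic way to verify which runtime your nodes actually run (plain kubectl, nothing Rancher-specific):

      $ kubectl get nodes \
          --output jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.containerRuntimeVersion}{"\n"}{end}'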

  • @sukurcf
    @sukurcf 3 years ago +1

    5:11
    Loved how you switched to dark mode.

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago

      That's the first thing I do in every app :)

  • @RaviSharma-vw7py
    @RaviSharma-vw7py 2 years ago +1

    I think you have explained it very nicely. Thank you so much, from India...

  • @jmmtechnology4539
    @jmmtechnology4539 1 year ago +2

    Very interesting, thanks for the video!

  • @evilqaz
    @evilqaz 3 years ago +11

    I love rancher :)

  • @jiaxinshan6753
    @jiaxinshan6753 3 years ago +1

    I finally get why people love your videos. You do point out those terrible features/bugs. I really love this kind of hands-on experience rather than those apathetic tutorials.

  • @DaiquiriFlavour
    @DaiquiriFlavour 3 years ago +1

    Very authentic and helpful video! Thanks!

  • @MrTheGurra
    @MrTheGurra 3 years ago +4

    I don't use Rancher for cluster management or creation. Like you say, it feels unnecessary when using managed clusters on DO, Google, AWS, etc. However, I always attach Rancher post-creation to manage RBAC and ingress, and to get a basic UI overview of what is going on.
    If there is one thing I still feel Rancher does best, it is RBAC management (linking to GitHub, AD, etc.).
    Also, for getting some quick templates up for basic apps, visualizing where things are located, editing configs and secrets, and so on, it is very handy. On the other hand, that kind of works against GitOps, so :D back to pros and cons..

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago +1

      You're right. RBAC is really good with Rancher, and I feel silly for not even mentioning that in the video.
      Editing anything through the Rancher Web UI is great if you are a small team. But as soon as things scale, both in people and in operations, operating something from a Web UI becomes dangerous and unproductive.

  • @Peter1215
    @Peter1215 3 years ago +5

    Really interesting video, and also on time for me. I worked with Rancher a few years back (on-prem) and will go back to working with it towards the end of the year. The setup with Docker and the fact that it uses post-provisioning installs on node pools made me wince a bit. I hope they will fix it soon.
    Personally, I prefer CLIs over UIs, so the dashboard view is not a killer feature for me. I still prefer provisioning with Terraform and am also exploring Crossplane and Cluster API.
    Could you make a video about a fully automated provisioning lifecycle, including Day-2 ops? There are plenty of videos about how to start, but rarely good ones that dive into Day-2-specific challenges. Thanks for a great video, as always :)

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago +3

      Personally, I think that the main value of Rancher is in its Web UI. Those who prefer working with a CLI or IaC (me included) are probably better off without Rancher. RKE alone, managed through a CLI or IaC, is probably enough.
      Adding "automated provisioning lifecycle including Day-2 ops" to my TODO list... :)

  • @Flyingnobull
    @Flyingnobull 3 years ago +4

    Most important thing that everyone should do: change it to dark mode! YES!

  • @DukeofTech90
    @DukeofTech90 6 months ago +1

    Please help. I have an interview and I was given a task to do. I've done most of it, but I'm having issues with Rancher because I'm supposed to use it for my Kubernetes deployments. The problem is that I can't find the add-nodes button on the UI.

  • @lightspeed79
    @lightspeed79 8 months ago +1

    One thing to be careful about, which I struggled to figure out, is that it installs on a Docker IP (for example, 172.16.0.5) and not on the localhost 127.0.0.1. Therefore, if you try to access it via localhost or 127.0.0.1, it might not work. I had to spend many hours to see that this is how it works, since the Rancher academy and other tutorials stated otherwise.
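
    In case it helps others, a sketch assuming the single-node Docker install; the container name is a placeholder:

    $ docker inspect rancher \
        --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'
    $ # Or publish the ports to the host when starting Rancher so localhost works:
    $ docker run -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher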

  • @cyberlord64
    @cyberlord64 2 years ago +1

    5:29 0.6/5.8 cores? Am I missing something here? What is 0.6 cores, exactly?

    • @DevOpsToolkit
      @DevOpsToolkit  2 years ago

      0.6 is how much CPU is used, while 5.8 is how much CPU is allocatable. Rancher does not see how much memory and CPU a node has. Instead, it sees what Kubernetes sees, which is allocatable resources. That's always less than the physical CPU and memory, since a bit is taken by system processes.
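
      You can see the difference per node with plain kubectl; for example:

      $ kubectl get node <node-name> \
          --output jsonpath='capacity: {.status.capacity.cpu}{"\n"}allocatable: {.status.allocatable.cpu}{"\n"}'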

    • @cyberlord64
      @cyberlord64 2 years ago +1

      @@DevOpsToolkit Interesting. I wonder what the thought process was behind the decision to call this "cores" as opposed to something more abstract like "resources".

    • @DevOpsToolkit
      @DevOpsToolkit  2 years ago

      It is, in a way, cores, but those available to Kubernetes.

  • @spy.catcher
    @spy.catcher 3 years ago +1

    Would also be interested in you showing us how best to utilize and implement Tailscale/Taildrop in your preferred cluster setup and config. Thanks!

  • @codecoffee-farsi3392
    @codecoffee-farsi3392 3 years ago +2

    What's your opinion on running Rancher 2.x on a three-node on-premises cluster? VM specification: 4 CPU, 16 GB.

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago +3

      For control plane nodes, 2 CPU and 4 GB RAM should be enough to start, unless you plan to combine control plane and worker nodes.
      Worker nodes are more complicated, and no one can answer that question, since it depends on the workloads you'll have in that cluster. You'll probably lose 1 CPU and 1 GB RAM (or less) on system-level processes, and the rest depends on your workloads (apps).

  • @GeertBaeke
    @GeertBaeke 3 years ago +4

    Would be interested to know what you think of VMware Tanzu? 😀

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago +3

      Adding it to my TODO list... :)

    • @DevOpsToolkit
      @DevOpsToolkit  2 years ago

      It took a while to move Tanzu to the top of my TODO list, but now it's finally done and available at ruclips.net/video/iO6ZLCrN8kA/видео.html.

  • @jaysistar2711
    @jaysistar2711 3 years ago +1

    I'm still not able to retire Docker Engine for some nodes. While containerd is used for k8s pods, there are a few labeled nodes that have Docker Engine as well, with a named pipe bind-mounted to it for build agents. Kaniko didn't work for multi-stage Dockerfiles when I tried it (a few months ago). Do we have a non-Docker-Engine way to build container images yet that works for multi-stage Dockerfiles?

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago +1

      I had the same problem with multi-stage builds, but that was fixed (at least in my case) a while ago. Your situation might be an edge case, so I suggest opening an issue in the Kaniko project.

  • @kingroc3651
    @kingroc3651 2 years ago +1

    My lab has no internet connection. Does this work in that kind of environment? I see the nodes need to download images from the internet.

    • @DevOpsToolkit
      @DevOpsToolkit  2 years ago

      I think that you can configure it to use images from any registry. If that's true, you can set up a local registry, download images (one way or another), and push them to that registry.
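
      A minimal sketch of that workflow; the registry address and image tag are placeholders, and Rancher also documents a dedicated air-gapped installation, so treat this as the general idea only:

      $ docker pull rancher/rancher:v2.6.3
      $ docker tag rancher/rancher:v2.6.3 registry.local:5000/rancher/rancher:v2.6.3
      $ docker push registry.local:5000/rancher/rancher:v2.6.3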

  • @mustaphanaji2523
    @mustaphanaji2523 2 years ago +1

    Is there any reference architecture for an active-active setup between on-prem and cloud Rancher clusters?

    • @DevOpsToolkit
      @DevOpsToolkit  2 years ago

      I'm not sure I understood the question.

  • @prasathl1997
    @prasathl1997 2 years ago +1

    What value did you give to the RANCHER_ADDR variable?

    • @DevOpsToolkit
      @DevOpsToolkit  2 years ago +1

      It's in the Gist that accompanies the video. You can find the export command in gist.github.com/vfarcic/a701b929d1416b095bd58daa24f8b013#file-82-rancher-sh-L54.
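
      For anyone who doesn't want to open the Gist: it's the address Rancher will be served on. A typical pattern (not necessarily the exact line from the Gist) combines the ingress IP with a wildcard DNS service such as nip.io:

      $ export RANCHER_ADDR=rancher.$INGRESS_HOST.nip.io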

  • @TeresaShellvin
    @TeresaShellvin 7 months ago +1

    Can you please make a video on how to upgrade Helm charts using CI/CD, or how to automate the deployment of Helm charts?

    • @DevOpsToolkit
      @DevOpsToolkit  7 months ago +1

      I tend to use Argo CD or Flux for that. You'll find quite a few videos about those on this channel.

    • @TeresaShellvin
      @TeresaShellvin 7 months ago +1

      @DevOpsToolkit Today an interviewer asked me how I upgrade Helm using automation, and whether I have ever used CI/CD for upgrading Helm. I prefer Argo CD, tbh; I introduced Argo CD in my previous organizations as well.

    • @DevOpsToolkit
      @DevOpsToolkit  7 months ago +1

      @TeresaShellvin Essentially, you just need to change the tag in values.yaml and push the changes back to Git. From there on, either Argo CD does the job or, if you're not using it, helm upgrade does the trick.
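
      A sketch of that flow, with the chart path, image key, and tag as placeholders (assuming mikefarah's yq v4 for the edit):

      $ yq --inplace '.image.tag = "1.2.3"' values.yaml
      $ git add values.yaml && git commit -m "Bump image to 1.2.3" && git push
      $ # Without Argo CD or Flux watching the repo, apply the change yourself:
      $ helm upgrade --install my-app ./chart --values values.yaml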

    • @TeresaShellvin
      @TeresaShellvin 7 months ago +1

      @@DevOpsToolkit Awesome, thank you so much.

  • @mahdirashki6752
    @mahdirashki6752 3 years ago +2

    Thank you so much for your video, but there are more advanced options for K8s cluster lifecycle managers, like CAPI or Hive, that have MachineSet/MachineDeployment/MachineConfig and cluster autoscaler options.

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago +1

      Oh yeah. There's much more cluster managers can do. Still, in my experience, that level is usually combined with everything-as-code, stored in Git, and managed with a different type of tools. Rancher is mostly for those who prefer using a Web UI and, more often than not, the majority of such users do not go deep.

  • @TovergO
    @TovergO 3 years ago +1

    About immutable images: are there any pre-built, up-to-date Kubernetes VM images ready for use?

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago +2

      As far as I know, there aren't any for self-managed Kubernetes clusters. I hope that will change once RKE2 comes into Rancher.

  • @Textras
    @Textras 3 years ago +1

    Excellent video.

  • @rabigurung7188
    @rabigurung7188 2 years ago +1

    Is it possible to add a Raspberry Pi (ARM64) as a node in an RKE cluster? I get stuck at the node provisioning stage when I try to add a Raspberry Pi as a node in the RKE cluster.
    Much appreciated.

    • @DevOpsToolkit
      @DevOpsToolkit  2 years ago +1

      Unfortunately, I haven't tried it with Raspberry Pi so I cannot say whether it works there or not. My best guess is that it does since it's based on k3s which does work with Pi, but I cannot confirm that.
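
      If you want to rule out the hardware, the standalone k3s installer is a quick sanity check on the Pi itself (this bypasses Rancher's provisioning entirely):

      $ curl -sfL https://get.k3s.io | sh -
      $ sudo k3s kubectl get nodes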

    • @rabigurung7188
      @rabigurung7188 2 years ago +2

      @@DevOpsToolkit Thanks.

  • @gameprofitsGalactic
    @gameprofitsGalactic 3 years ago +1

    Brother, clearly you have to know: only the coding and cluster junkies like me love Rancher :) Well done video.

  • @subzizo091
    @subzizo091 2 years ago +1

    How do I fix the error below?
    Error: chart requires kubeVersion: < 1.25.0-0 which is incompatible with Kubernetes v1.25.3+k3s1

    • @DevOpsToolkit
      @DevOpsToolkit  2 years ago

      Are you referring to gist.github.com/vfarcic/a701b929d1416b095bd58daa24f8b013#file-82-rancher-sh-L24?

    • @subzizo091
      @subzizo091 2 years ago +1

      @@DevOpsToolkit No, the Helm chart version. My current k3s version is 1.25, which is not compatible with the chart.

    • @DevOpsToolkit
      @DevOpsToolkit  2 years ago

      That's common. Vendors tend to be about two minor k8s versions behind. However, most vendors do tend to work on transitions away from deprecated features much earlier, since deprecations in k8s tend to last for at least a year. Rancher might have failed to do that, and you might need to wait or downgrade your k3s version until then.
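
      To find a chart version whose kubeVersion constraint matches your cluster, you can list the available versions; the repo name here assumes the usual rancher-latest Helm repo:

      $ helm repo update
      $ helm search repo rancher-latest/rancher --versions | head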

  • @gcezaralmeida
    @gcezaralmeida 3 years ago +1

    Thank you for your video. I like it very much. You are very up to date. Could you create a video comparing OS distros for running Kubernetes on-premises? Which is the best one?

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago

      Adding it to my TODO list... :)

    • @Peter1215
      @Peter1215 3 years ago +1

      Seconded. I'm going to start working with k8s on-prem more (I have been working with AKS for almost 2 years now), and I'm interested in which distros would be best. Also, RKE seems like a great choice for on-prem.

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago +1

      @@Peter1215 RKE is indeed a great choice for k8s on-prem. It might easily be the best choice, at least among free options.
      I'll do my best to bump the "k8s/OS distros on-prem" topic closer to the top of my TODO list.

  • @Flyingnobull
    @Flyingnobull 3 years ago +1

    Hey Viktor, could you make a video on k8s security applications such as StackRox and Twistlock? How necessary are they, are they replaceable with other measures, what's their effect on cluster performance, etc.?

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago +1

      Great suggestions! Adding them to my TODO list... :)

  • @nguyenquang5216
    @nguyenquang5216 2 years ago +1

    Hello Admin,
    I installed Rancher on my local K8s cluster (1 master node + 2 worker nodes). I built this cluster from scratch with 3 Ubuntu VMs on VMware Workstation. Each VM has 2 NICs (1 public + 1 private).
    I ran the following command from your script:
    # If NOT EKS
    export INGRESS_HOST=$(kubectl \
        --namespace ingress-nginx \
        get svc ingress-nginx-controller \
        --output jsonpath="{.status.loadBalancer.ingress[0].ip}")
    # then
    echo $INGRESS_HOST
    However, the result is still blank.
    Can you suggest how to solve it?
    Thank you very much!

    • @DevOpsToolkit
      @DevOpsToolkit  2 years ago

      If it's an on-prem Kubernetes cluster, the Ingress service probably cannot be of the LoadBalancer type. That would result in the creation of an external LB, which would (probably) not work. Instead, you need to change the service to be NodePort. That will open a port on your cluster nodes. After that, you need to configure whichever LB or proxy (e.g., nginx) you're using to forward requests to the nodes' IPs on that port. As an alternative, you can skip the LB/proxy altogether and use the IP of one of the nodes and the port of the Ingress service directly.
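
      A sketch of the NodePort route, assuming the same ingress-nginx names as in the script above:

      $ kubectl --namespace ingress-nginx patch svc ingress-nginx-controller \
          --patch '{"spec": {"type": "NodePort"}}'
      $ kubectl --namespace ingress-nginx get svc ingress-nginx-controller
      $ # Then use a node's IP instead of a load balancer IP:
      $ export INGRESS_HOST=$(kubectl get nodes \
          --output jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')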

    • @nguyenquang5216
      @nguyenquang5216 2 years ago +1

      @@DevOpsToolkit Thank you very much.
      I made it work in two ways:
      + used the alternative solution you told me about, the NodePort type for the Ingress service
      + as a second way, used MetalLB for load balancing in the cluster.
      Thanks a lot :)

  • @squalazzo
    @squalazzo 3 years ago +1

    I had issues with it on a local lab machine (32 GB RAM, i7, 480 GB SSD): some services refused to start up, and the kubectl "web GUI" gave error 1006 and did not show anything... Looking for alternatives: what do you suggest for creating a local test lab? I put Proxmox on this machine and created 6 VMs using Ubuntu 20.04, 3 masters and 3 workers, with the host (Proxmox is based on Debian 10) sharing its own disk space as an NFS share... but I'd like to move away from Rancher, so, suggestions? :)

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago +3

      If it's for a local lab, I strongly recommend k3d. For a while now, it's the only local k8s I'm using. Check out ruclips.net/video/mCesuGk-Fks/видео.html ...
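
      For example, a three-node local cluster is a single command (k3d v4+ syntax):

      $ k3d cluster create lab --servers 1 --agents 2
      $ kubectl get nodes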

    • @squalazzo
      @squalazzo 3 years ago +1

      @@DevOpsToolkit Yup, already watched that (all your videos from the last 6 months, really), thanks!

    • @tdeutsch
      @tdeutsch 3 years ago +1

      @@DevOpsToolkit Without having seen the video yet: Why k3d and not k3s on a VM or k3os?

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago +2

      @@tdeutsch k3d is k3s running inside containers. As a result, it is much faster and requires fewer resources than VMs, especially if you try to run a multi-node cluster.

    • @tdeutsch
      @tdeutsch 3 years ago

      @@DevOpsToolkit I was under the impression that k3d is "k3s in Docker" and not "k3s in containers". So I was like you: "docker!? whywhywhywhywhywhy" :-) Last weekend, I discovered I have Podman on my router and I can make it run other containers than "only" its GUI. So I gave the rancher/k3s image a try and used it there with Podman :-)

  • @alvarotorres3529
    @alvarotorres3529 3 years ago +1

    Great content! What do you think about Gardener? Do you think it is a good choice for running a KaaS on top of OpenStack? Thanks

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago +1

      I haven't been using OpenStack for a long time now, so I cannot comment on the specific combination of the two. That being said, Gardener is great, but it also has its own issues.
      The short answer: Gardener is good.
      A longer answer: I have it very high on my TODO list, and a detailed video is coming soon :)

  • @Yrez1234
    @Yrez1234 3 years ago +1

    Nice video, Viktor! What are the alternatives to Rancher for monitoring multiple Kubernetes clusters in the cloud using a single UI? I prefer using IaC to manage multiple clusters (Terraform, Argo CD), but what about monitoring and visualization of all clusters? Grafana and service mesh UIs give us this kind of detail for a dedicated cluster, but it would be useful to have a unified UI to check the health of all clusters, manage alerts... Have you already explored this kind of tool?

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago +2

      One of my complaints or, to be more precise, missed opportunities in Rancher is that it does not have a cross-cluster dashboard. Everything is still based on single-cluster views, and the only thing it gives you are links to each of those clusters. It would be awesome if Rancher provided some kind of unified view of all the clusters.
      If you use IaC, you probably do not use dashboards to manage clusters but mostly for monitoring. The best bet is to ship metrics from all the clusters to a single DB. That could be Prometheus with Thanos or one of the SaaS offerings like DataDog.

    • @Yrez1234
      @Yrez1234 3 years ago +2

      @@DevOpsToolkit Thanks! Yes, I mainly use it for monitoring. I wasn't aware of Thanos; I will have a look at that project.
      It could also be a good topic to cover: how to monitor multiple Kubernetes clusters.

    • @tdeutsch
      @tdeutsch 3 years ago +2

      @@DevOpsToolkit I'm not aware of a multi-cluster dashboard with a unified view. And tbh, I don't think I would need one from a business perspective. I would have to separate my customers anyway if it's a shared "dashboard". For people managing multiple clusters who are CLI addicts, maybe k9s is something to look at.

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago +2

      @@Yrez1234 The problem with Prometheus is that it does not scale. Thanos solves (or tries to solve) that problem.

    • @Yrez1234
      @Yrez1234 3 years ago +1

      @@DevOpsToolkit Got it! Thanks for the answer

  • @mr_wormhole
    @mr_wormhole 1 year ago +2

    K9S gang rise up

  • @sf2998
    @sf2998 3 years ago +2

    Is Rancher the best tool for creating and managing multiple clusters, or is there a better option?

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago +1

      That depends on whether you prefer to use a web UI for those tasks or IaC/CLI. If it is the former, Rancher is a good choice. If it's the latter, use Terraform, Pulumi, or Crossplane.

    • @sf2998
      @sf2998 3 years ago +1

      How about Lens IDE?

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago +1

      @@sf2998 Lens is a UI for managing resources inside a k8s cluster, not for managing clusters themselves.

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago +1

      Just published a review of Lens.
      ruclips.net/video/q_ooC1xcGCg/видео.html

  • @SumanChakraborty0
    @SumanChakraborty0 3 years ago +1

    Mirantis k0s would improve on Rancher's k3s limitations by providing CRI-O as a runtime alternative, in addition to providing integration with multiple CSI storage options.

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago +2

      I think they already addressed that in RKE2. I just hope that it will be available in Rancher soon.

    • @tdeutsch
      @tdeutsch 3 years ago +1

      Which k3s limitation do you mean? k3s is not Docker; it's containerd.

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago

      @@tdeutsch I think that he mixed k3s with Rancher.

    • @tdeutsch
      @tdeutsch 3 years ago

      @@DevOpsToolkit Maybe. But comparing k0s and k3s would make sense. Do you by chance have a video comparing those two? AFAIK they share the same goal, and it would be nice to have a comparison of them.

  • @eamonnmccudden1070
    @eamonnmccudden1070 3 years ago +4

    Can you do an OpenShift video (or videos)?

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago +4

      Adding it to my TODO list... :)

    • @eamonnmccudden1070
      @eamonnmccudden1070 3 years ago +1

      @@DevOpsToolkit looking forward to it! Thanks as always

  • @m19mesoto
    @m19mesoto 3 years ago +1

    KOPS? What do you think?

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago +1

      It lost its purpose the moment EKS went public.

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago

      As a side note, YouTube deleted the comment you made about branching strategies. It tends to do that when there are links. Can you post it again, but without the links?

  • @vladtarasenko1363
    @vladtarasenko1363 1 year ago +1

    thanks for the video

  • @hazi.m
    @hazi.m 3 years ago +1

    If the main "pro" for Rancher is on-prem installations, how does it compare to OKD? That should be free OpenShift without Red Hat support, right? Would OKD be a better option for on-prem? You would get storage management, immutable provisioning, and a non-Docker setup in addition to all the stuff Rancher provides.

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago +2

      I haven't used OKD enough to compare it, at least not yet. I haven't seen it much in the field either. Most of the companies I worked with are either using OpenShift, Rancher, or something else that is not OKD. That being said, the number of companies I worked with is limited, and that does not mean that OKD is not widely used, but rather that I haven't been around it.

    • @hazi.m
      @hazi.m 3 years ago +2

      @@DevOpsToolkit I've had the same experience. Most companies that need OpenShift and have the workforce to use/understand it are willing to pay for it; otherwise, they go for Rancher or home-built solutions, as you mentioned.
      I'm also trying to figure out why I haven't seen OKD being used compared to other free/low-cost alternatives. Complexity could be one reason, but I wonder if it is worth it if you (or your team) are able to handle that.
      Anyway, I would love to see your videos on OpenShift and OKD :)

  • @zenmaster24
    @zenmaster24 3 years ago +1

    Is this Rancher giving up, or SUSE pausing development after the acquisition?
    What is the better free cross-provider UI management alternative (including on-prem and cloud clusters)?

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago +2

      I do not think there is a better free cross-cluster UI solution. Most of the work in that area is around IaC tools rather than UI-based.

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago +1

      Also, I doubt that SUSE is pausing anything. They would not have acquired Rancher if they did not have plans. My guess is that they are reorganizing it instead. Also, SUSE needs to figure out a revenue stream for Rancher, and my best guess is that it is not going to be the cloud, since that is already dominated by others. It will more likely be on-prem as a cash cow and edge as the future. Those are purely guesses, though, since I do not have any inside information.

    • @zenmaster24
      @zenmaster24 3 years ago +1

      @@DevOpsToolkit It's a decent guess, but it may not make them the revenue they expect; most on-prem Kube clusters that I have seen in large orgs are OpenShift, which has its own UI.
      Reorganizing could also introduce an unintentional pause in development, as things are being changed.

  • @holgerwinkelmann6219
    @holgerwinkelmann6219 2 years ago +1

    Despite all the comments about the glory of cloud-managed k8s, many, many users cannot and will not run on public cloud; at least not our customers, none of whom are allowed to run there. I would prefer you to be fair and focus on the Rancher use case, which is mostly on-prem, and then compare it with other on-prem alternatives.

    • @DevOpsToolkit
      @DevOpsToolkit  2 years ago +1

      I do agree that Rancher is mostly for on-prem users, and I believe I said that in the video. Nevertheless, Rancher's marketing claims both on-prem and cloud, so both are valid choices from their perspective.

    • @DevOpsToolkit
      @DevOpsToolkit  2 years ago

      As a side note, the comparison with other alternatives is coming. I just released a video about Tanzu, OpenShift comes soon, and a comparison of the three after that.

    • @holgerwinkelmann6219
      @holgerwinkelmann6219 2 years ago +1

      @@DevOpsToolkit Sure, marketing must cover this, but if I had a public-cloud-only strategy, I would not bother with the Rancher candidates. I would make a Crossplane composition for infra, clusters, services, and applications, composed from cloud APIs. But hey, what do you do if you don't have that, if it must run on edge, bare metal, on-prem, etc., as all our customers require? There are not many alternatives: you either build yourself a CAPI-based platform and provide your own CAPI provider package for composition (API-wise not really complicated, just work ;)). But the biggest BUT: who provides you with maintained node images or k8s distributions you can rely on? The example images coming with CAPI are OK for testing, but production? Especially for Day-2 ops. Rancher at least provides the RKE distribution and images.
      Maybe you can make a comparison of the alternatives. IMHO this would be the following shortlist, but you might be aware of others:
      * Rancher + RKE2
      * RH OpenShift
      * Microsoft; Kinvolk Lokomotive
      * Tanzu?
      * Gardener
      * Mirantis
      * Kubermatic
      * ...
      * plain DIY CAPI

    • @holgerwinkelmann6219
      @holgerwinkelmann6219 2 years ago +1

      @@DevOpsToolkit I'm carrying on testing Rancher 2.6 with some RKE2 (containerd) tech-preview clusters ;) Nice weekend!

    • @DevOpsToolkit
      @DevOpsToolkit  2 years ago

      RKE2 (not 1) is one of the best distributions for on-prem Kubernetes.
      Thanks for the list. I did not have Lokomotive on mine. Adding it...

  • @RoyOlsen
    @RoyOlsen 3 years ago +1

    Weird how people think Kubernetes is something you should buy from a public cloud provider. Easy, yes. But so expensive.

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago +1

      It all depends on cost analysis. Hosted Kubernetes (e.g., GKE, EKS, AKS) can reduce operations. You're paying for a service and that might or might not be cost-effective depending on your skill level, needs, etc.

    • @RoyOlsen
      @RoyOlsen 3 years ago +1

      @@DevOpsToolkit Any particular reason you erased my comment? Don’t care for insights?

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago +2

      @@RoyOlsen I never deleted anyone's comment. YouTube, on the other hand, tends to delete comments automatically, especially if they contain links. Please try again, and if that fails, send me a DM on Twitter (@vfarcic) or LinkedIn and I'll publish the comment for you.
      In any case, I'm sorry your comment was deleted. Unfortunately, I do not have any means to control YouTube's policy.

    • @RoyOlsen
      @RoyOlsen 3 years ago +1

      @@DevOpsToolkit Strange. It was a fairly long comment, but no links and nothing impolite or terribly controversial. All right then, thanks for the reply, appreciate it.

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago +2

      @@RoyOlsen The YouTube algorithm is a mystery, and their comment policy is very frustrating. I wish there were something I could do but, after days spent with their support, my conclusion is that there isn't anything I can do :(

  • @m19mesoto
    @m19mesoto 3 years ago +1

    I think I can sense the SUSE influence already :D
    The new v2 interface is terrible. Anyway, I really like Rancher in some ways..

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago

      Rancher has a special place in my heart. It helped me a lot when I was less experienced and Kubernetes was much less mature. For a while, it was, without doubt, the best way to create and manage k8s clusters. In the meantime, k8s got much better and managed Kubernetes services (e.g. EKS, AKS, GKE, etc.) got better over time. As a result, Rancher started having less and less differentiating features.

  • @yuewang7854
    @yuewang7854 3 years ago +3

    docker: whywhywhywhy? lol

  • @TeresaShellvin
    @TeresaShellvin 7 months ago +1

    docker

  • @stormrage8872
    @stormrage8872 3 years ago +1

    I hate Rancher and all it does; it's way too intrusive in the communication between the control plane and the nodes. We were left with no working management on production clusters for a while, until we decided to redeploy everything from scratch without Rancher. If you don't pay for support, it's a ticking bomb.

    • @DavidBerglund
      @DavidBerglund 3 years ago

      I totally agree. We switched to MicroK8s as we were running Ubuntu anyway. You manage workloads like with any cluster and have a helpful CLI for cluster management. And, optionally, enterprise support!
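
      For context, that setup boils down to a few commands on Ubuntu; addon names may differ between MicroK8s versions:

      $ sudo snap install microk8s --classic
      $ microk8s enable dns ingress storage
      $ microk8s kubectl get nodes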

  • @lavishly
    @lavishly 3 years ago +1

    DO NOT use Rancher. Sad it sold. It took years of my life in frustration and stress. The team sucks: arrogant and not helpful. Left it and never looked back!!!

  • @tdeutsch
    @tdeutsch 3 years ago +3

    Rancher deploys ingress. You're just "using it wrong" :-D
    In the video, you are enabling ingress but not the default backend. Therefore, you cannot see a service, because that's the only service you would see for ingress:
    $ kubectl get service -A | grep ingress
    ingress-nginx default-http-backend ClusterIP 10.43.160.19 80/TCP 288d
    However, even without the default backend, you should have ingress. Please check this:
    $ kubectl get pods -A | grep ingress
    ingress-nginx default-http-backend-6977475d9b-4km5v 1/1 Running 0 24d
    ingress-nginx nginx-ingress-controller-mjblv 1/1 Running 0 24d
    ingress-nginx nginx-ingress-controller-sld26 1/1 Running 0 24d
    ingress-nginx nginx-ingress-controller-thbcc 1/1 Running 0 24d
    kube-system rke-ingress-controller-deploy-job-4c4hq 0/1 Completed 0 24d
    You should have the ingress controller and should be able to create Ingresses, but you do not have the "default backend" service.
    Regarding Docker: I mostly agree. However, two notes on this: I) dockershim will be supported by third parties for a longer period. II) While RKE depends on Docker, they already have k3s and RKE2, which do not. Therefore, one should consider RKE a "soon to be replaced" product. The pity is that RKE2 and k3s are supported by Rancher for being imported and managed (and upgraded), but Rancher does not support creating clusters with them. Hopefully that comes sooner rather than later. Especially k3s and k3OS are really awesome; I did a lot with them recently.
    PSP will not go away; it will be reimplemented differently. But as of now, it's still here and therefore should be supported by Rancher.
    I fully agree on the Kubernetes versions of cloud providers. It's even worse: in Azure, if you create an AKS cluster from Rancher, everything is fine. It knows it's AKS and automatically grays out stuff like etcd and the controller, because those are "hidden" in AKS. If you create it in Azure and import it, you get big red failures right on your dashboard (the one with the gauges) because it does not detect that it's AKS.

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago

      My issue was mostly related to the cloud (I recognize that Rancher is amazing for on-prem). Rancher does not understand the clouds it supports. It is supposed to create a LoadBalancer Service for Ingress that spins up an external LB.
      I'm eagerly awaiting RKE2 and k3s in Rancher. I'm surprised that those are not already there. My best guess is that the acquisition by SUSE created temporary delays.

    • @tdeutsch
      @tdeutsch 3 years ago +2

      @@DevOpsToolkit I see. It's not ingress you want; LoadBalancer it is :) I'll give you that point. However, I'm not aware of anything on-prem that has LoadBalancer built in. It is (or was) a cloud-only thing. I tried MetalLB, which works, but I never saw it "in the wild". And for myself (for my homelab) it's useless, because I need one IP to which I can forward the traffic. What I'm currently playing around with (and what may suit my homelab use case best) is keepalived deployed into the cluster, because I use k3OS now and you really can't do fancy stuff in it directly. But it does upgrade itself together with k3s :-) Like OpenShift/RHCOS does, but "free" and "lightweight" :D

    • @tdeutsch
      @tdeutsch 3 years ago

      @@DevOpsToolkit And for cloud LBs: they are only "included" in the cloud-provided Kubernetes clusters, right? If I use AKS, I can have an LB. If I build my own cluster on Azure VMs, I need to bring my own K8s LB. Is this different with DO? Never used DO, tbh.

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago

      @@tdeutsch Oh yeah, I was referring only to the cloud. I never saw an implementation of LB Services in self-managed clusters. They might exist, but I haven't used them.
      Nevertheless, the whole purpose of LB Services in k8s is only to configure an external LB with the IPs of the nodes and the port. Other than that, it's the same as NodePort, which is probably what you're using.

    • @tdeutsch
      @tdeutsch 3 years ago +1

      @@DevOpsToolkit OK, maybe we are speaking about different things; my apologies, as English is not my mother tongue. Let me explain what I know:
      Speaking of LBs deployed together with or in Kubernetes, there's the Service type LoadBalancer used in cloud clusters like AKS etc., giving you the possibility to connect from the internet to the service, i.e., kind of a public IP.
      To have something similar on-prem, there's MetalLB. Basically, you give it a range of IP addresses and it hands them out to Services of type LoadBalancer. Similar behavior to a cloud LB.
      Where I think we had confusion is the thing I would call an LB "in front of" k8s. The main reason for that is having a single HA endpoint which then gets forwarded to a node's ingress (port 80 or 443). For customer setups, we usually do that with a pair of Linux VMs with HAProxy on them and keepalived to give them a single IP. This is something I only use on-prem.