Service Mesh In Kubernetes Explained

  • Published: 21 Oct 2024

Comments • 73

  • @DevOpsToolkit
    @DevOpsToolkit  3 years ago +6

    Are you using a service mesh? If you are, which one did you choose and why?

    • @srgpip
      @srgpip 3 years ago +2

      Great video, Viktor. We employ Linkerd, really pleased with it thus far.

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago +1

      @@srgpip LinkerD is awesome

    • @davidliao7961
      @davidliao7961 2 years ago +1

      We chose Linkerd for its ease of use, light weight, and performance over Istio. OSM from Microsoft is good, but it's still not GA.

  • @ariskaraiskos8079
    @ariskaraiskos8079 1 year ago +8

    I was searching for more than a year for a video that simply explains the features of a service mesh vs what k8s offers out of the box. This video is simply outstanding!

  • @jnrc
    @jnrc 2 years ago +2

    Thanks Viktor, I have shared this video with my coworkers here in Mexico as we are launching our first Kubernetes platform. For non-English speakers your English is really clear and useful. Keep going!

  • @mysticaltech
    @mysticaltech 2 years ago +9

    Viktor, you are my Kube teacher, thank you so much! Love your teaching style and content! 🙏

  • @ankit_adarsh
    @ankit_adarsh 3 years ago +14

    Awesome! Linkerd vs Istio would be great!

  • @ijayeti4332
    @ijayeti4332 7 months ago +1

    This is the best explanation of a service mesh I’ve seen

  • @StephaneMoser
    @StephaneMoser 3 years ago +8

    Nice video, you did a great job summarizing this topic. I would love to see a comparison between Istio and Linkerd

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago +1

      Adding "another fight" to my TODO list... :)

  • @wollginator
    @wollginator 3 years ago +15

    Thanks for the overview, Linkerd vs Istio would be a nice and helpful comparison!

  • @sajithvenkit4887
    @sajithvenkit4887 3 years ago +1

    Thank you for yet another great video. Looking forward to a follow-up session comparing Linkerd and Istio

  • @kulinskyvs
    @kulinskyvs 3 years ago +6

    Comparing service mesh implementations would be really interesting!

  • @h2_kumar
    @h2_kumar 1 year ago +1

    Excellent explanation. Nicely done.

  • @Peter1215
    @Peter1215 3 years ago +2

    You put it very well: "Networking is typically more important than the majority of people think it is"! I'd argue that the value a service mesh brings, and actually even Kubernetes itself, is "just" encapsulating complex networking concepts and making them work for a specific purpose. Also, a Linkerd vs Istio vid would be great

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago +3

      Yep. Service Mesh did not invent something that did not exist, but it simplified complex operations and made them more hands-off. From that perspective, it is similar to Docker. Docker did not "invent" containers, but made them simpler to create and manage.

  • @sergiocardenas3335
    @sergiocardenas3335 3 years ago +5

    We implemented Linkerd, fast and simple; can't be happier with it. The documentation can be improved, though it covers most scenarios. I would suggest Istio vs Linkerd, and see where the magic always happens.

  • @anshuman2121
    @anshuman2121 3 years ago +1

    Appreciate it. The concepts are clear now. Hope a practical tutorial on it comes soon.

  • @kevinyu9934
    @kevinyu9934 3 years ago +2

    BTW, Knative is built on top of a service mesh, specifically Istio. In addition, it provides a good solution for auto-scaling or scaling to zero, which is very handy in many scenarios.
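
    As a rough illustration (not from the video), a minimal Knative Serving manifest that permits scale-to-zero could look like this sketch; the name and image are placeholders:

      apiVersion: serving.knative.dev/v1
      kind: Service
      metadata:
        name: hello                      # placeholder name
      spec:
        template:
          metadata:
            annotations:
              # Allow scaling down to zero replicas when there is no traffic
              autoscaling.knative.dev/min-scale: "0"
              autoscaling.knative.dev/max-scale: "5"
          spec:
            containers:
              - image: ghcr.io/example/hello:latest   # placeholder image

    Knative then routes requests through the mesh (or another networking layer) and spins Pods back up on the first incoming request.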

  • @mcgowan7100
    @mcgowan7100 3 years ago +1

    Great explanation, so clear. Another +1 for a deep dive or comparison featuring Istio

  • @steshaw6510
    @steshaw6510 3 years ago +1

    Yes, please do compare service meshes. Particularly Istio and Linkerd, but also if there are tangible reasons to consider others.

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago +1

      It's on my TODO list. I can't guarantee when, since the list is growing too fast, but I will certainly get to it as soon as I can.

  • @apoloniaturk9968
    @apoloniaturk9968 2 years ago +1

    Beautiful video. Do you have anything similar that focuses on security protocols that tend to be used, such as SSH, and newer stuff?

    • @DevOpsToolkit
      @DevOpsToolkit  2 years ago

      I have not used SSH for years now. It's all based on APIs now.

  • @Blkhole02
    @Blkhole02 3 years ago +2

    Not using a proper one at the moment; Services and the ALB Ingress controller provided by Amazon do the trick for us, at least for now. However, my previous client's infrastructure was heavily dependent on Istio (running a multi-region / multi-language ad search portal with pretty complex routing logic), up to the point where 60% of my work day consisted of writing virtual services and routing rules for Istio. I find it to be a bit of a double-edged sword - while Istio had far more features than Consul or Linkerd at the time (mid 2019), it lagged behind slightly in terms of speed, at least according to the benchmarks we ran back then. Also, the learning curve and overall complexity were among the highest of any tool I've worked with. Can't remember the last time a piece of software made me feel that dumb.

  • @andreykaliazin4852
    @andreykaliazin4852 3 years ago +1

    Very, very important subject, thanks for covering it!
    Do you by chance have a Kuma review on your TODO list? If not, then please do add it - it is based on the widely used Envoy proxy and is platform-agnostic - it can link VMs and Pods just as easily. It would be great to cover it within the GitOps framework, of course.

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago +1

      I didn't have it on my TODO list. Adding it now...
      Thanks for the suggestion.

  • @raghuvardhansaripalli9636
    @raghuvardhansaripalli9636 1 year ago +1

    Super video, sir. I have some idea now. I have a question: can we use a service mesh like Istio for multi-cluster communication between two different clouds, like Azure and GCP (for example, cluster 1 resides in Azure and cluster 2 resides in GCP)?

  • @fenarRH
    @fenarRH 3 years ago +2

    It would be beneficial to explain how DNS (resolving an FQDN to an IP) works within k8s, in single- and multi-cluster setups; without describing that, I think it is a little bit vague how a svc mesh works.

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago

      You're right. Adding it to my TODO list... :)
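
      In the meantime, a quick sketch of the single-cluster part (names are placeholders): a plain ClusterIP Service gets a DNS record from the cluster DNS (usually CoreDNS), so Pods can resolve its FQDN to a stable IP:

        apiVersion: v1
        kind: Service
        metadata:
          name: my-app
          namespace: demo
        spec:
          selector:
            app: my-app          # matches the labels on the backing Pods
          ports:
            - port: 80           # port the Service exposes
              targetPort: 8080   # port the containers listen on
        # Inside the cluster, my-app.demo.svc.cluster.local resolves to
        # this Service's ClusterIP; kube-proxy (or the CNI) forwards the
        # traffic to one of the matching Pods.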

  • @nathanwalker1424
    @nathanwalker1424 2 months ago +1

    Great video.

  • @javisartdesign
    @javisartdesign 3 years ago +1

    nice, thanks for the video

  • @smerlos
    @smerlos 3 years ago +3

    I am implementing Linkerd. The reason? CNCF

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago

      Yep. My main beef with Istio is that Google does not want to let it go and have it become one of the projects in the foundation.

  • @mrnobody5763
    @mrnobody5763 3 years ago +1

    @DevOps Toolkit by Viktor Farcic, could I ask what's your camera? You make amazing content from a technical and graphic perspective.

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago +1

      The camera is a Sony ZV1, but that is not the main thing. Lighting is more important. I have two Elgato lights in front of me, and LIFX RGB light bulbs in the background. If I had to choose between a light and a camera, I would go with the light. I had similar results with the webcam integrated into my iMac. Last year's models have the best webcams I've seen (much better than Logitech or those integrated into MacBook Pros).

    • @mrnobody5763
      @mrnobody5763 3 years ago +1

      @@DevOpsToolkit interesting. Thank you very much

  • @mdaverde
    @mdaverde 1 year ago +1

    One particular thing I'm not a fan of in the cloud-native philosophy is that it makes some practices seem like "features" when I'm not so sure they are.
    For example, autoscaling tends to be a baked-in feature in many cloud-native technologies such as Kubernetes clusters and serverless. It seems to be a good thing by default. But why? Autoscaling in some circumstances can be a response to a symptom of an underlying problem, such as a DDoS attack, failing application performance, or network latency issues. We also operate under financial constraints. It's not that these projects explicitly say you shouldn't spend time root-causing the need to autoscale, but I do feel like our industry as a whole doesn't hold these practices in skepticism for long before making them widespread. Latency-aware load balancing is a service mesh feature that I'm skeptical of.

    • @DevOpsToolkit
      @DevOpsToolkit  1 year ago

      I feel that it is the other way around, at least where Kubernetes is concerned. Kubernetes itself is very limiting, and one can argue that it's not of much use alone. You need to add features on top of it to make it do what you want it to do. Cluster Autoscaler is not installed by default, nor is a service mesh or even networking itself. You need to choose what you need and add it, so if Cluster Autoscaler is not something you need, you just do not add it.
      Now, the situation is slightly different when using managed Kubernetes like GKE, EKS, or AKS. In those cases, the services come with a few "add-ons" which are sometimes baked in and, at other times, require us to install and operate them ourselves. For example, EKS does not come with Cluster Autoscaler. You need to install it or, even better, opt for Karpenter.
      Now, whether any of those is a good thing depends on your needs. I could argue that a system with variable traffic would be too expensive without scaling apps, and scaling apps without scaling nodes would mean that we would always have to run the maximum possible capacity. On the other hand, if you have relatively stable traffic that rarely fluctuates, then yes, scaling is not needed.
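
      To make the app-scaling half concrete, here is a minimal HorizontalPodAutoscaler sketch (the Deployment name is a placeholder); node scaling via Cluster Autoscaler or Karpenter is configured separately:

        apiVersion: autoscaling/v2
        kind: HorizontalPodAutoscaler
        metadata:
          name: my-app
        spec:
          scaleTargetRef:
            apiVersion: apps/v1
            kind: Deployment
            name: my-app               # placeholder Deployment
          minReplicas: 2
          maxReplicas: 10
          metrics:
            - type: Resource
              resource:
                name: cpu
                target:
                  type: Utilization
                  averageUtilization: 80   # add replicas above 80% average CPU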

    • @mdaverde
      @mdaverde 1 year ago +1

      @@DevOpsToolkit Thank you for your response. I do think cloud-native experts are more sophisticated and nuanced in their decision-making when building out internal developer platforms, and that's why they're in the position they're in. I just feel for the application developer teams who, in a world of shifting left, want to focus on features but are also being asked which "add-ons" they want on the current cloud flavor of Kubernetes.

    • @DevOpsToolkit
      @DevOpsToolkit  1 year ago +1

      @@mdaverde I agree with that. We cannot make decisions in areas we're not experts in. If there's no one else in a company with the expertise to make such choices or help out, the solution is often to go with SaaS solutions. For example, Google Cloud Run is amazing and very simple to use by anyone. Google in that case already made the choices, and all we have to do is accept them and go with the flow.
      P.S. I used Google Cloud Run as an example. Something similar can be said for Azure Container Apps, Fly.io, and many other opinionated solutions.

  • @jjhratm
    @jjhratm 3 years ago +2

    Excellent video!

  • @joebowbeer
    @joebowbeer 3 years ago +1

    A comparison of Istio, Linkerd and AWS App Mesh would be appreciated.

  • @PelenTan
    @PelenTan 3 years ago +1

    I have never really understood "service mesh". Whenever I've asked an advocate, the most I've gotten is a pointer to long, dry articles. Even though they themselves swear by it, they seem to have zero clue what it actually does. Thank you for this. Not saying I fully understand it. But it's clear it was half misnamed. While it does deal with "services", it has zero to do with "meshing". It's a service network router, plus-plus. It is just doing what a more classic network router does for electrical traffic, just inside the cloud.

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago +1

      A service mesh is, at the end of the day, a way to do networking. The major difference is that it is a more dynamic and more hands-off type of approach, which is better suited for the dynamic environments we have today.
      I would not say that it does it inside the cloud but, rather, that it does it inside Kubernetes. It works just as well on-prem as in the public cloud. That being said, on-prem can be considered (private) cloud given that certain requirements are met, so you're right.

    • @andreykaliazin4852
      @andreykaliazin4852 3 years ago +1

      @@DevOpsToolkit And it can go beyond Kubernetes, linking VMs, hosts, edge devices - pretty much anything that has a networking (L4/L7) stack in it. We can think of a service mesh as a re-invented TCP/IP offload engine, which takes the complexity of handling IP traffic away from the application stack. It also does other things as well, like applying policies.
      I'd love to see it also implement Fibre Channel, RDMA, and other networking protocols apart from TCP, which is not very efficient from a latency/throughput point of view.

  • @aszecowka
    @aszecowka 3 years ago +1

    It would be interesting to show service mesh overhead in terms of request time (mTLS and smart routing are not free) and resource consumption (sidecar memory and CPU usage).
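
    For reference, in Istio the mTLS whose overhead would be measured is typically enabled with a PeerAuthentication resource roughly like this sketch (applied in the root namespace, it takes effect mesh-wide):

      apiVersion: security.istio.io/v1beta1
      kind: PeerAuthentication
      metadata:
        name: default
        namespace: istio-system   # Istio's root namespace in a default install
      spec:
        mtls:
          mode: STRICT            # sidecars reject plain-text traffic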

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago

      What would you suggest comparing it with? It cannot be with and without mTLS, since that would be unfair. If someone doesn't need mTLS, that someone shouldn't use mTLS. The same goes for traffic shifting and other service mesh features. So, I guess that a valid comparison would be, for example, using mTLS (and the others) with a service mesh and without it. It would show whether a feature like that consumes fewer resources or is faster with a service mesh than with something else. Is that what you meant? Or is it about how much those additional features cost in terms of resource consumption?

    • @aszecowka
      @aszecowka 3 years ago +1

      @@DevOpsToolkit My idea was to compare Istio vs without Istio to see its overhead. In addition to that, it would be interesting for me to see mTLS alternatives. If you want encrypted communication between two services to prevent man-in-the-middle attacks, what other options do you have?

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago +1

      @@aszecowka I'll probably do a comparison of service meshes soon and I'll add "overhead" as one of the criteria, including the cases of not using service mesh.

    • @bisdak5761
      @bisdak5761 3 years ago +2

      @@aszecowka Probably a CNI approach using WireGuard? Cilium and Calico have it, I guess
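
      For anyone exploring that route, transparent encryption in Cilium is, as far as I know, enabled through Helm values along these lines (verify against the docs for your version):

        # values.yaml for the cilium Helm chart
        encryption:
          enabled: true
          type: wireguard   # encrypt node-to-node pod traffic with WireGuard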

  • @MusheghDavtyan
    @MusheghDavtyan 2 years ago

    I want to implement NGINX Service Mesh and can't really find the advantages and the pros and cons of that tool compared to Linkerd or Istio.

    • @DevOpsToolkit
      @DevOpsToolkit  2 years ago +2

      I haven't used NGINX Service Mesh, so I cannot compare it with Istio/Linkerd. The reason I haven't used it lies in the industry support. It has a very low user base and, as a result, hardly any tool that relies on a service mesh supports it. For example, if you want to do canary deployments, you will likely pick Argo Rollouts or Flagger. But neither of those supports NGINX Service Mesh. As a result, you'll have to pick something else, likely developed by NGINX, since hardly anyone else supports it. But NGINX probably does not have something similar to Flagger/Argo Rollouts, so you might have to roll your own.
      Being part of the cloud-native or Kubernetes ecosystem might be the most important criterion when choosing tools. It does not matter how good a tool is if it does not work with (almost) anything else.
      The root cause of the issue is that NGINX (the company, now F5) chose to ignore Kubernetes for a long while. That's why most of those who chose to use NGINX Ingress picked kubernetes.github.io/ingress-nginx/ and not the one developed by F5. Something similar can be said for NGINX Service Mesh. It came too late to get any significant adoption. Not only that, but by the time it came, it did not offer any good reason for anyone to switch from Linkerd, Istio, and others.
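
      For context, the Argo Rollouts side of a canary looks roughly like this trimmed sketch (placeholder names; the actual traffic shifting is delegated to a supported service mesh or ingress controller):

        apiVersion: argoproj.io/v1alpha1
        kind: Rollout
        metadata:
          name: my-app
        spec:
          replicas: 5
          selector:
            matchLabels:
              app: my-app
          template:
            metadata:
              labels:
                app: my-app
            spec:
              containers:
                - name: my-app
                  image: ghcr.io/example/my-app:v2   # placeholder image
          strategy:
            canary:
              steps:
                - setWeight: 20        # shift 20% of traffic to the new version
                - pause: {}            # wait for manual promotion
                - setWeight: 50
                - pause: {duration: 10m}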

  • @KingoOoVideos
    @KingoOoVideos 1 year ago +1

    I don't understand why you would use a service mesh for service discovery in K8s, since it has the Service feature for exactly that reason.

    • @DevOpsToolkit
      @DevOpsToolkit  1 year ago

      That is correct. Service discovery is baked into Kubernetes itself (through Services and cluster DNS), while the underlying networking is implemented through whichever CNI provider you choose. A service mesh, on the other hand, aims at advanced networking.

    • @KingoOoVideos
      @KingoOoVideos 1 year ago +1

      @@DevOpsToolkit Thanks Viktor, I got that, but my question is: can I replace native service discovery with service mesh service discovery?

    • @DevOpsToolkit
      @DevOpsToolkit  1 year ago +1

      @@KingoOoVideos Service meshes use the service discovery baked into Kubernetes, so the answer is no, you can't replace it directly. That's how most things work in Kubernetes. It provides basic building blocks used by higher-level solutions. For example, you might use Knative, which serves as an application definition. It, in a way, replaces Deployment (among other things). However, behind the scenes (at lower levels) it ends up creating Pods. Service meshes do the same. You might end up using a resource from a service mesh to route traffic (e.g., VirtualService), but those resources will, one way or another, use Services (the basic building block for service discovery), as the sketch below shows.
      There are exceptions, though. A good example would be kcp, which removed most of Kubernetes' resource definitions and is not using some of those "basic building blocks", but that is an exception rather than the rule.
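
      As an illustration of that layering (placeholder names), an Istio VirtualService splits traffic by weight, yet each destination host is still a regular Kubernetes Service, and the subsets come from a companion DestinationRule:

        apiVersion: networking.istio.io/v1beta1
        kind: VirtualService
        metadata:
          name: my-app
        spec:
          hosts:
            - my-app                # the Kubernetes Service name
          http:
            - route:
                - destination:
                    host: my-app    # still resolved through the Service
                    subset: v1      # subset defined in a DestinationRule
                  weight: 90
                - destination:
                    host: my-app
                    subset: v2
                  weight: 10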

    • @KingoOoVideos
      @KingoOoVideos 1 year ago +1

      @@DevOpsToolkit I think the best way to differentiate when to use native service discovery vs a service mesh is this: when you have multiple K8s clusters, or when you want to connect to a traditional app on a VM, you must use a service mesh for service discovery

    • @DevOpsToolkit
      @DevOpsToolkit  1 year ago +1

      @@KingoOoVideos Oh yeah. I thought you were referring to in-cluster service discovery. When you have multi-cluster discovery, a service mesh is definitely the way to go.

  • @andreykaliazin4852
    @andreykaliazin4852 3 years ago +1

    And what is wrong with the F5 LB? I don't want to manage it, but I have to live with it; it's everywhere, especially now that NGINX is part of it.

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago +1

      F5 is indeed one of the best, if not the best, options for on-prem data centers. Nevertheless, NGINX chose to ignore Kubernetes for a long time, and that backfired. NGINX Ingress was the dominant one, and now it is slowly fading away. Service mesh is almost completely out of their picture. With the majority moving to the cloud, F5 is becoming obsolete as well. Finally, management of something is important and, as you hinted, that's not an easy task with F5.
      Unless a significant change happens, I would not invest my $$$ into F5 (ignoring the fact that I don't have $$$ to invest).

    • @andreykaliazin4852
      @andreykaliazin4852 3 years ago +1

      @@DevOpsToolkit All the clients I was dealing with have invested in F5 and are not going to get rid of it any time soon. Infra teams are usually small and overworked, and not going to re-invest time and effort into something that kind of just works out there. :-)
      F5 is embedded within the AD and security perimeters around on-prem, and on-prem is also not going away for most financial services, insurance, governmental, health, national security, energy, etc. corporate users. In that big country across the pond they still use 5.25" floppy disks in some very important places (facepalm) (do they buy them on eBay? I wonder :-)

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago +1

      @@andreykaliazin4852 I understand. That's the reality for many companies, including those I work with. Nevertheless, that does not mean that things are not changing. Financial services are, for example, moving to the public cloud. They are adopting "newer" tools and processes. Truth be told, it's going slowly, but that does not mean that it is not happening. Take a look, for example, at HSBC. It's one of the biggest financial institutions in the world, and it is moving to Google Cloud. Capital One is championing open source. ABN AMRO is going multi-cloud, ING Direct is a pioneer among progressive banks, etc.
      What I'm trying to say is that you are right when you say that certain industries are slower than others. Nevertheless, they all know that they have to change, and even if only a fraction of their current workloads are "modern", that's where most of their investment is going. All that's needed is competition and, in the case of financial services, that competition is FinTech, which caused a few of the "big" ones to change, which created a snowball effect towards the others. They will likely never be on the cutting edge, but they are not accepting the status quo either.

    • @andreykaliazin4852
      @andreykaliazin4852 3 years ago +1

      @@DevOpsToolkit Indeed. My rants are caused by the fact that we are called for help not by industry champions like HSBC, but by those who are lagging behind. So my perspective is correspondingly skewed :-)

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago

      @@andreykaliazin4852 In most cases, financial institutions and other "traditional" industries are risk-averse, mostly because they have some form of monopoly or a "guaranteed" market percentage. As such, they do not change unless forced to. In the financial services industry, that is happening right now (for a myriad of reasons). As a result, you can expect "laggards" to be forced to improve, not because they want to, but because their business is forcing them to. The alternative is to disappear, which is a likely scenario for many. In either case, they will not keep doing the things you mentioned for long, either because they'll change or because they will not exist for long.

  • @JomenoVona
    @JomenoVona 2 years ago +1

    😹😂 And other "...ilities" @ 4:37