Kubernetes Tutorial: Why Do You Need StatefulSets in Kubernetes?

  • Published: 4 Oct 2024
  • Access the full course here: kodekloud.com/...
    🆓Join our Slack Community for FREE: kode.wiki/Join...
    Before we talk about StatefulSets, we must first understand why we need them. Why can't we just live with Deployments?
    Let's start from the very basics. For a minute, set aside everything we have learned so far: Deployments, Kubernetes, Docker, containers, virtual machines.
    Let's just start with a simple server. Our good old physical server. We are tasked with deploying a database server, so we install and set up MySQL on the server and create a database. Our database is now operational, and other applications can write data to it.
    To withstand failures, we are tasked with deploying a high-availability solution. So we deploy additional servers and install MySQL on those as well. The new servers start with blank databases.
    So how do we replicate the data from the original database to the databases on the new servers?
    There are different topologies available.
    The most straightforward one is a single-master, multi-slave topology, where all writes go to the master server, and reads can be served by either the master or any of the slave servers.
    The master server should therefore be set up first, before the slaves are deployed.
    Once the slaves are deployed, perform an initial clone of the database from the master server to the first slave. After the initial copy, enable continuous replication from the master to that slave, so that the database on the slave node is always in sync with the database on the master.
    Note that both slaves are configured with the address of the master host. When replication is initialized, you point each slave at the master using the master's hostname or address. That way, the slaves know where the master is.
    Let us now go back to the world of Kubernetes and containers and try to deploy this setup.
    In the Kubernetes world, each of these instances, master and slaves alike, is a Pod that is part of a Deployment.
    In step 1, we want the master to come up first, and then the slaves. Among the slaves, we want slave 1 to come up first and perform the initial clone of data from the master; and in step 4, we want slave 2 to come up next and clone data from slave 1.
    With Deployments, you can't guarantee that order: all Pods that are part of a Deployment come up at the same time.
    So the first step can't be implemented with a Deployment.
    As we have seen while working with Deployments, Pods come up with random names, so that won't help us here. Even if we designate one of these Pods as the master and use its name to configure the slaves, if that Pod crashes and the Deployment creates a new Pod in its place, it comes up with a completely new name. The slaves are now pointing to an address that does not exist. Because of all this, the remaining steps can't be executed.
    And that's where StatefulSets come into play. StatefulSets are similar to Deployments, in that they create Pods based on a template, but with some differences. With StatefulSets, Pods are created in sequential order: after the first Pod is deployed, it must be in a Running and Ready state before the next Pod is deployed.
    That helps us ensure the master is deployed first, then slave 1, then slave 2. And that takes care of steps 1 and 4.
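A minimal sketch of such a StatefulSet (the names, labels, and image tag here are illustrative assumptions, not taken from the lecture):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql        # headless Service that gives the Pods their network identity
  replicas: 3               # created one at a time: mysql-0, then mysql-1, then mysql-2
  podManagementPolicy: OrderedReady   # the default: each Pod must be Running and Ready
                                      # before the next one is created
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:8.0
        ports:
        - containerPort: 3306
```

Scaling down or deleting the StatefulSet removes Pods in reverse ordinal order, so the highest-numbered Pod goes first.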
    StatefulSets assign a unique ordinal index to each Pod: a number starting at 0 for the first Pod and incrementing by 1 for each subsequent Pod.
    Each Pod gets a name derived from this index combined with the StatefulSet name. So the first Pod is named mysql-0, the second mysql-1, and the third mysql-2. No more random names; you can rely on these names going forward. We can designate the Pod named mysql-0 as the master and the other Pods as slaves. Pod mysql-2 knows that it has to perform an initial clone of data from Pod mysql-1. If you scale up by deploying another Pod, mysql-3, it knows it can perform a clone from mysql-2.
    To enable continuous replication, you can now point the slaves to the master at mysql-0.
    Even if the master fails and the Pod is recreated, it still comes up with the same name. StatefulSets maintain a sticky identity for each of their Pods, and that takes care of the remaining steps. The master is now always the master, and always available at the address mysql-0.
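Pointing the slaves at mysql-0 by name relies in practice on a headless Service, which the upcoming lectures cover. A sketch, assuming the StatefulSet is named mysql and references this Service in its serviceName field:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  clusterIP: None       # headless: no load-balanced virtual IP
  selector:
    app: mysql
  ports:
  - port: 3306
```

With this in place, each Pod gets a stable DNS entry of the form mysql-0.mysql.<namespace>.svc.cluster.local, so the slaves can always find the master by name.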
    And that is why you need StatefulSets. In the upcoming lectures, we will talk more about creating StatefulSets, headless services, persistent volumes, and more.
    #KubernetesTutorial #Kubernetes

Comments • 52

  • @KodeKloud
    @KodeKloud  1 year ago

    Access the Kubernetes course here: bit.ly/KubernetesLearningPath

  • @neverforgetsamit
    @neverforgetsamit 1 year ago +1

    Unfortunately I have only one like button. Thanks for the crystal clear explanation.

    • @KodeKloud
      @KodeKloud  1 year ago

      Hi, we appreciate the kind comment! Enjoy!

  • @ishasharma2809
    @ishasharma2809 1 year ago +1

    Beautifully explained

  • @soumyadipchatterjee2267
    @soumyadipchatterjee2267 7 months ago

    Excellently explained, Mumshad. You're a gem 💎

  • @BipinJethwani
    @BipinJethwani 4 years ago +2

    Mumshad you are cool. Nice set of videos. Thanks for making it so simple and easy to understand.

  • @sk45861
    @sk45861 11 months ago +1

    nice explanation - but small request, please use night-mode for the video as most developers prefer night mode :)

    • @KodeKloud
      @KodeKloud  11 months ago

      Thanks for the tip

  • @shantanuparanjpe8363
    @shantanuparanjpe8363 3 years ago +1

    Very good explanation... Thanks, Mumshad...

    • @KodeKloud
      @KodeKloud  3 years ago

      Glad you enjoyed the video and good to know that it cleared your doubts. Thanks😊
      Please subscribe to the channel and support us 😊

  • @devareddy8817
    @devareddy8817 4 years ago

    Sir, it is understandable, but pairing it with a practical would give me much more understanding.

  • @0ffset925
    @0ffset925 2 years ago +1

    wonderful explanation. thank you

  • @amitpawar1677
    @amitpawar1677 3 years ago

    Simply explained sir thank you.....

    • @KodeKloud
      @KodeKloud  10 months ago

      Greetings! Thank you for your kind words. Spread the word by liking, sharing and subscribing to our channel! Cheers :).

  • @f2f4ff6f8f0
    @f2f4ff6f8f0 1 year ago +1

    Thank you Mumshad !!

  • @samitjain2414
    @samitjain2414 3 years ago +2

    Good explanation. Question: how does K8s preserve statefulness if a stateful Pod crashes and is replaced by another Pod?

  • @mzimmerman1988
    @mzimmerman1988 4 years ago

    this was very helpful! Thank you.

  • @khalilbn
    @khalilbn 4 years ago

    Great explanation, thank you mumshad

  • @mehdilionel48
    @mehdilionel48 1 year ago +1

    good😀 explanation !

  • @ashwani0505
    @ashwani0505 4 years ago +10

    Hi Mumshad,
    Thanks for the great session. As you said, with a StatefulSet, if any Pod goes down it comes up with the same name, i.e. mysql-0 in your example. How does this newly constructed Pod sync the data? It might have lost all the data when it went down; even if it uses persistent storage, it might have missed some new data during the period when it was down?

    • @RadhamaniRamadoss
      @RadhamaniRamadoss 4 years ago +3

      When the master goes down, there are no writes, so there won't be any need to sync data for the master node.

    • @harshgoyal6822
      @harshgoyal6822 4 years ago +2

      The Pod will get the same data again because we will use a PersistentVolume.

    • @haressedMom
      @haressedMom 4 years ago +2

      When the new Pod is created, the initContainers will ensure that the data is synced over to it. Data can be written on mysql-0.

    • @wewarntogether
      @wewarntogether 1 year ago

      @haressedMom Where are the queries stored at that point in time?

  • @AbdelkaderZenasni-c7f
    @AbdelkaderZenasni-c7f 1 year ago

    Very good, thanks

    • @KodeKloud
      @KodeKloud  11 months ago

      You are welcome!

  • @anilkommalapati6248
    @anilkommalapati6248 3 years ago +1

    Hi Sir - I'd appreciate it if you could provide another, simpler example if possible. I think this is quite a complicated one.

  • @syedsaifulla8961
    @syedsaifulla8961 4 years ago

    Good explanation

  • @jaimeeduardo159
    @jaimeeduardo159 4 years ago

    Great video!

  • @gangadharrao7186
    @gangadharrao7186 4 years ago

    Hello Sir, nice video. Please share any videos on Helm charts and getting started with Kubernetes clusters. Thanks...

  • @jashvasabbu6479
    @jashvasabbu6479 4 years ago +2

    Hi Mumshad.
    Your explanation is awesome.
    I have one doubt here: how do Pods get dynamically assigned an IP address?
    Which component in K8s allocates the IPs to Pods?
    Kindly answer.

    • @aliakbarhemmati31
      @aliakbarhemmati31 4 years ago

      I think kube-proxy

    • @jashvasabbu6479
      @jashvasabbu6479 4 years ago +1

      @aliakbarhemmati31 OK, thanks for the answer. I'm also trying to find out how Pods get dynamically reassigned IPs whenever they die.

    • @vickyprabhat
      @vickyprabhat 2 years ago

      The network plugin, sir. Calico is an example.

  • @DontScareTheFish
    @DontScareTheFish 4 years ago +3

    Your StatefulSet's members have different capabilities: the master (mysql-0) in your example is writable, while the others aren't.
    Given those differences, why not two charts: mysql-master with a replica count of 1, and mysql-replica with a replica count that can scale? And for the replicas connecting to the master, why not use the Service name instead of the hostname?

    • @vickyprabhat
      @vickyprabhat 2 years ago +1

      This is what I thought. We use a Service to communicate between two Pods anyway, so why would we need the Pod name?

  • @richardwang3438
    @richardwang3438 4 years ago

    great video!

  • @nestorreveron
    @nestorreveron 4 years ago

    Thanks master!

  • @husseinrefaat2530
    @husseinrefaat2530 3 years ago +1

    thanksssss

  • @deva_2022
    @deva_2022 10 months ago

    Which course is this part of? CKA? Or which course in KodeKloud?

    • @KodeKloud
      @KodeKloud  6 months ago

      StatefulSets are indeed included in both the Certified Kubernetes Administrator (CKA) and Certified Kubernetes Application Developer (CKAD) courses.

  • @reardeltoit4644
    @reardeltoit4644 3 years ago

    Thank you, I love you. May Allah bless India.

  • @aboubacaralaindioubate6086
    @aboubacaralaindioubate6086 3 years ago +1

    Why "master" and "slaves", and not "primary Pods" and "secondary Pods"? Is this a cultural bias?
    Is it ethical to do so?

  • @fz-ssaravanan2036
    @fz-ssaravanan2036 4 years ago

    Need your help. I have created an "nfs server pod" instead of a local NFS installation, then created the PV. In the PersistentVolume, I have mentioned the NFS server's service name, like:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: nfs
    spec:
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteMany
      nfs:
        server: nfs-server.default.svc.cluster.local
        path: "/"

    Now I want to bind this PersistentVolume in the StatefulSet volumeClaimTemplate in GCP. How do we mention it? I gave it like this in the StatefulSet:

    volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi

    The problem is, with the above, a new volume is created rather than binding to my PV... I hope you understand what I am trying to say; else let me know. Thanks in advance.

    • @aliakbarhemmati31
      @aliakbarhemmati31 4 years ago

      You should specify storageClassName in the PV definition and refer to it with the same storageClassName in the PVC.
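A sketch of that suggestion applied to the manifests above (the class name nfs-manual is an illustrative assumption; note that the access modes must also be compatible):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  storageClassName: nfs-manual   # added: PVCs requesting this class can bind here
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs-server.default.svc.cluster.local
    path: "/"
---
# Fragment of the StatefulSet spec:
volumeClaimTemplates:
- metadata:
    name: data
  spec:
    storageClassName: nfs-manual   # matches the PV's class
    accessModes: ["ReadWriteMany"] # must be a mode the PV offers; the original
                                   # ReadWriteOnce request would not match this PV
    resources:
      requests:
        storage: 10Gi
```

Without a storageClassName, the claim falls through to the cluster's default StorageClass, which is why a fresh volume was being provisioned instead of binding to the existing PV.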

  • @jignesh144
    @jignesh144 4 years ago +2

    Play at 1.25x speed.

  • @fakdaddy75
    @fakdaddy75 4 years ago +1

    I don't think this is correct. You have mysql-0 as the master. If mysql-0 goes down, the replica/slave should become the new master (i.e. mysql-1), and the old mysql-0 should transition to slave.