Keynote: How Spotify Accidentally Deleted All its Kube Clusters with No User Impact - David Xia

  • Published: 27 Jul 2024
  • Join us for Kubernetes Forums Seoul, Sydney, Bengaluru and Delhi - learn more at kubecon.io
    Don't miss KubeCon + CloudNativeCon 2020 events in Amsterdam March 30 - April 2, Shanghai July 28-30 and Boston November 17-20! Learn more at kubecon.io. The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy, and all of the other CNCF-hosted projects.
    Keynote: How Spotify Accidentally Deleted All its Kube Clusters with No User Impact - David Xia, Infrastructure Engineer, Spotify
    During Spotify's Kubernetes migration, David's team deleted most of their production Kubernetes clusters. Accidentally. Twice. With little to no user impact. David shares how they recovered and learned to operate many clusters automatically and safely.
    In 2017, Spotify planned the migration of hundreds of teams, thousands of services, and tens of thousands of hosts to Google Kubernetes Engine (GKE). In the last half of 2018, Spotify migrated 50 teams and hundreds of services, including critical ones, onto multiple production clusters.
    David describes what led to the cluster deletions and how they barely affected users. Since the postmortem, Spotify has minimized downtime and human error by declaratively defining clusters in code with Terraform, backing up and restoring clusters with Ark, and increasing scalability and availability by running many more clusters. (A rough sketch of a Terraform guard rail in this spirit appears below the description.)
    sched.co/MQbb
  • Science
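
The remediation described above (clusters declared in Terraform, backups with Ark, many smaller clusters) lends itself to simple guard rails around `terraform apply`. The following is a rough sketch of one such guard, not Spotify's actual tooling: it renders a saved plan file as JSON and refuses to continue if the plan would destroy any GKE cluster. The plan file name plan.out and the set of protected resource types are assumptions.

```python
#!/usr/bin/env python3
"""Pre-apply guard: refuse to proceed with a Terraform plan that would
destroy any GKE cluster. Sketch only; the plan file name and protected
resource types are assumptions, not Spotify's actual tooling."""
import json
import subprocess
import sys

PROTECTED_TYPES = {"google_container_cluster"}  # Terraform's GKE cluster resource type

def planned_destroys(plan_file: str = "plan.out"):
    # `terraform show -json <planfile>` renders a saved plan as machine-readable JSON.
    raw = subprocess.run(
        ["terraform", "show", "-json", plan_file],
        check=True, capture_output=True, text=True,
    ).stdout
    plan = json.loads(raw)
    # Each entry in resource_changes carries the planned actions for one resource.
    for rc in plan.get("resource_changes", []):
        actions = rc.get("change", {}).get("actions", [])
        if "delete" in actions and rc.get("type") in PROTECTED_TYPES:
            yield rc["address"]

if __name__ == "__main__":
    doomed = list(planned_destroys())
    if doomed:
        print("Refusing to apply: this plan would destroy cluster resources:")
        for address in doomed:
            print(f"  - {address}")
        sys.exit(1)
    print("No cluster destroys found; safe to run `terraform apply plan.out`.")
```

In Terraform's JSON plan output a replacement shows up as actions ["delete", "create"], so checking for "delete" also catches plans that would tear a cluster down and recreate it.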

Comments • 16

  • @MrRidwanbejoz
    @MrRidwanbejoz A year ago

    I believe Spotify is a company that really appreciates its engineering team. A culture of learning is something new.

  • @sreejitkar7981
    @sreejitkar7981 A year ago

    This is like one of those water-cooler conversations you get to have with that seasoned architect at work who has made enough interesting mistakes! Also, I feel isolating your bundles from your infra can actually help avoid these errors.

  • @unfathomablej
    @unfathomablej 5 years ago +6

    This is super entertaining. Sorry you guys had to deal with a mangled tfstate file in production. It's a terrible rite of passage.

  • @anandkrishna6687
    @anandkrishna6687 4 years ago +1

    Loved it. Honest and a great learning experience.

  • @manipal2011
    @manipal2011 5 years ago +3

    Two Kubernetes teams: cluster operators and cluster users.

  • @bojackhorsingaround
    @bojackhorsingaround 2 years ago +2

    Wonderful test case even for a beginner like me. Good talk!

  • @emmadoyle4157
    @emmadoyle4157 2 years ago +3

    If the internal Slack channel was "eerily quiet", it's probably because teams don't have enough alerting set up to notify them that their applications/services aren't running in production.

  • @mj-np9sy
    @mj-np9sy 5 years ago +4

    Go through your envs and protect those clusters from deletion now that you can! (One way to do that is sketched after this thread.)

    • @fredow123456
      @fredow123456 5 years ago +1

      Let's do it before bad things happen 😂
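
One concrete way to follow the advice above is Terraform's `lifecycle { prevent_destroy = true }` block, which makes any plan that would destroy the resource fail. Below is a deliberately naive, text-based sketch that flags google_container_cluster resources whose block does not mention prevent_destroy; it is not a real HCL parser, and the directory layout is an assumption.

```python
#!/usr/bin/env python3
"""Naive scan for GKE cluster resources missing `prevent_destroy = true`.
Text-based heuristic only, not a real HCL parser."""
import pathlib
import re

# Crude match of each google_container_cluster resource block; assumes the
# closing brace of the resource sits at the start of a line (terraform fmt style).
RESOURCE_RE = re.compile(
    r'resource\s+"google_container_cluster"\s+"([^"]+)"\s*\{(.*?)\n\}',
    re.DOTALL,
)

def unprotected_clusters(root: str = "."):
    for tf_file in pathlib.Path(root).rglob("*.tf"):
        for name, body in RESOURCE_RE.findall(tf_file.read_text()):
            if "prevent_destroy" not in body:
                yield tf_file, name

if __name__ == "__main__":
    hits = list(unprotected_clusters())
    for tf_file, name in hits:
        print(f"{tf_file}: google_container_cluster.{name} does not set prevent_destroy")
    if not hits:
        print("All cluster resources appear to set prevent_destroy.")
```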

  • @jasonquek8279
    @jasonquek8279 5 years ago +6

    Ouch, this was really painful. I guess you were running -auto-approve or had no manual review of the tf plan before applying. (A plan-review wrapper is sketched after this thread.)

    • @sachinkadam4742
      @sachinkadam4742 2 years ago

      Yeah, I guess so... quite a careless and not recommended approach for prod.
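
On the -auto-approve point: a common alternative is to save a plan, review it, and then apply only that saved plan. A minimal sketch of such a wrapper follows; the plan file name tfplan is an arbitrary choice, and the confirmation prompt is just one way to force a human review step.

```python
#!/usr/bin/env python3
"""Plan, review, confirm, then apply the reviewed plan file.
Minimal sketch of an alternative to `terraform apply -auto-approve`."""
import subprocess
import sys

def terraform(*args):
    subprocess.run(["terraform", *args], check=True)

if __name__ == "__main__":
    terraform("plan", "-out=tfplan")  # write the proposed changes to a plan file
    terraform("show", "tfplan")       # render the saved plan for human review
    answer = input("Apply exactly this plan? Type 'yes' to continue: ")
    if answer.strip() != "yes":
        print("Aborted; nothing was applied.")
        sys.exit(1)
    terraform("apply", "tfplan")      # apply only what was just reviewed
```

Applying the saved plan file also guarantees that what runs is exactly what was reviewed; if the infrastructure has drifted since the plan was written, terraform will refuse to apply the stale plan instead of improvising.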

  • @dm5665
    @dm5665 2 years ago

    Yes, I've accidentally deleted k8s clusters many times...

  • @eusfrasiuspatrickmarshall
    @eusfrasiuspatrickmarshall 4 years ago +1

    :clappepe:

    • @rayoroderik
      @rayoroderik 4 years ago

      Patrick Marshall :pepego: