GitOps: How To Use _____ (Not YAML) To Manage Kubernetes Resources With GitOps?

  • Published: 27 Nov 2024

Comments • 40

  • @IvanRizzante
    @IvanRizzante 27 days ago +1

    Thanks for another great video 🎉 Generally speaking, versioning the generated YAML is a good practice, also for comparing the changes a commit may introduce that are hidden to the eye. Argo CD makes things more complicated because of ApplicationSets, which may render one thing or another depending on runtime parameters. In theory you would have to render the YAML for all the possible parameter combinations. argocd-diff-preview aims to fix this by rendering the true desired state and showing it in the PRs.

  • @KrzysztofLuczakPro
    @KrzysztofLuczakPro 25 days ago +2

    Interesting video, thanks! Could you record another one (or give some tips) on how to use such an approach with Kubernetes as a small platform for hosting different apps, all of them deployed from a single monorepo? Currently I'm using Helmfile, which makes Helm similar to Terraform (diff and apply). I love Terraform, but for my case it was too slow. I don't use GitOps yet.

    • @DevOpsToolkit
      @DevOpsToolkit  25 days ago

      I would strongly recommend using GitOps. With it, you'll see that there is no need for Helmfile.

  • @Mvvement
    @Mvvement 27 days ago +3

    I am doing this same approach now with KCL and it works great! How much do you use KCL options vs file overrides for environment config? I think options generally satisfy my needs, but I could see times when you would want to do a one-off modification to a resource for an env.

    • @DevOpsToolkit
      @DevOpsToolkit  27 days ago

      I prefer storing everything in Git, so I don't use options, just as I haven't been using --set with Helm.

  • @jacekbartyzel2106
    @jacekbartyzel2106 27 days ago +1

    I am a great fan of the rendered manifests pattern. Clear visibility of the desired state in the rendered Git repo, and the CPU cost of generation is shifted left, away from the repo server (Helm and Kustomize cost more during Argo CD manifest generation than plain YAML).

  • @alexandre_des_rois
    @alexandre_des_rois 23 days ago +1

    Interesting topic. I am a novice at Kubernetes, but writing YAML without code completion, an LSP, and types is a nightmare. Please do yourself a favor: choose your favorite language with static code analysis (Java, TypeScript, C#, Go, Rust, etc.) and use it to produce YAML.
    Well-known open source projects like pgcn or Longhorn provide an OpenAPI schema along with their CRDs, allowing you to convert it to structured types for any language.
    I will never write pure YAML again.
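
    A minimal sketch of that idea in Go, using the upstream typed structs to produce a manifest (the app name, image, and output handling below are illustrative assumptions, not taken from the video):

    ```go
    // Sketch: generate a Deployment manifest from typed Go structs instead of
    // hand-writing YAML. Wrong field names or types fail at compile time.
    package main

    import (
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        replicas := int32(2)
        labels := map[string]string{"app": "my-app"} // placeholder app name

        dep := appsv1.Deployment{
            TypeMeta:   metav1.TypeMeta{APIVersion: "apps/v1", Kind: "Deployment"},
            ObjectMeta: metav1.ObjectMeta{Name: "my-app", Labels: labels},
            Spec: appsv1.DeploymentSpec{
                Replicas: &replicas,
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{Containers: []corev1.Container{{
                        Name:  "app",
                        Image: "ghcr.io/example/my-app:1.0.0", // placeholder image
                    }}},
                },
            },
        }

        out, err := yaml.Marshal(dep)
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out)) // commit this output to Git as the desired state
    }
    ```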

  • @JamesHounshell
    @JamesHounshell 27 days ago +1

    I agree with the sentiment 100%, but I think the tooling isn't quite there yet (though each year I continue to be surprised).
    We use Kustomize with Helm in our repos, and our Argo apps are of type kustomize.
    The missing link that we solved was writing a tool that renders the YAML as a PR comment. (Under the hood it's just running `argocd app diff` and then doing some formatting for GitHub Markdown. This should be familiar to people who use Atlantis with Terraform.)
    On the infra team we're all comfortable doing that locally as we develop, but the majority of our SWEs don't want to do that, so seeing their Argo diff in the PR is great for them.
    In general, if you're using any templating tool, this kind of feedback in the PR is crucial.
    I just wish we could have open sourced the tool, but it was deemed infeasible by legal.
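
    For illustration, a rough sketch of what such a tool might boil down to, assuming Go and the `argocd` CLI on the PATH; the app name is a placeholder, and posting the comment to GitHub is left out:

    ```go
    // Sketch: run `argocd app diff` and format the output as a GitHub Markdown
    // comment body. Posting it to the PR (e.g. via the gh CLI) is omitted.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        app := "my-app" // placeholder Argo CD application name

        out, err := exec.Command("argocd", "app", "diff", app).CombinedOutput()
        // `argocd app diff` exits non-zero when there is a diff, so only an
        // error with no output is treated as a real failure here.
        if err != nil && len(out) == 0 {
            fmt.Fprintln(os.Stderr, "diff failed:", err)
            os.Exit(1)
        }
        if len(out) == 0 {
            fmt.Println("No changes detected.")
            return
        }

        fence := "```"
        fmt.Printf("### Argo CD diff for `%s`\n\n%sdiff\n%s\n%s\n", app, fence, out, fence)
    }
    ```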

  • @luizscofield3128
    @luizscofield3128 28 days ago +2

    Liked the video! I'd like to suggest a video showing how you've been using IaC nowadays: the stack you like to use and how you integrate the tools.

    • @DevOpsToolkit
      @DevOpsToolkit  28 days ago

      That's probably the only subject I'm trying to avoid. I'm actively working on Crossplane, and whatever I say about other tools in that domain is likely to be misinterpreted, and I will be labeled as biased.

    • @DevOpsToolkit
      @DevOpsToolkit  28 days ago

      I can answer any question related to Crossplane if that helps.

    • @luizscofield3128
      @luizscofield3128 28 days ago +1

      @DevOpsToolkit I get your point, it's OK! Thanks for the response :)

  • @chrisre2751
    @chrisre2751 17 days ago +1

    Thank you, interesting video.
    I have a new question for a further video:
    What should be provided as central functions in a pipeline library, and what should be managed in each project?
    Where is the line between what an IDP team provides centrally and what a DevOps team should develop itself?
    Should teams be forced to implement certain pipeline stages in order to have a standard? For example, dependency checks or SBOMs that are created according to a standard.
    Let me explain the situation:
    In a large company with many teams and many services, there will (very likely) be many "identical" services from the perspective of the CI pipeline. The services may have different business logic, but their architecture and the technologies used are the same. For example, you have many REST services that were written in the same programming language and use the same libraries.
    A CI pipeline for the "same" projects hardly differs. Instead of copying pipeline code from project to project, it can therefore make sense to provide central functions that then only need to be maintained once.
    Is there a rule of thumb for what should be provided centrally and what should not?

    • @DevOpsToolkit
      @DevOpsToolkit  17 days ago

      That's a good one. I'll do my best to answer it in the short-video format soon (probably after KubeCon).

  • @DryBones111
    @DryBones111 28 days ago +2

    Thanks for this answer. It doesn't directly answer my question, but it did challenge the way I think about the problem, and as a result I can see new possible solutions.
    I currently use Argo CD plugins, but using Git as a pure YAML store makes a lot of sense. Would you recommend making a "state repo" with only bot write access, so that all pure YAML is fed in through automatic/pipeline actions?

    • @DevOpsToolkit
      @DevOpsToolkit  28 days ago +1

      Yes. I would always generate such YAML with workflows (pipelines), and I don't see a reason for anyone to have write access to it.

    • @silopolis-yt
      @silopolis-yt 28 days ago +1

      @DryBones111 thanks for asking this one.
      @DevOpsToolKit very interesting answer. It sounded a bit strange at first, with the template and state repos seeming a bit redundant... But on second thought, I like the fact that it creates a static desired state. It seems robust and makes me feel comfortable. Also, those YAML files could be leveraged for system documentation.

  • @ChewieBeardy
    @ChewieBeardy 12 days ago

    Thanks for this video! A bit of a late question: in this pattern, in which repo do the pre-templated high-level manifests live? In the source code app repo, or in the GitOps repo? Both approaches irk me for different reasons:
    * If it's in the source code repo, I feel I'm moving away from 12-factor design, since the app repo knows where it will be deployed and has the values for each env
    * If it's in the GitOps repo, then that repo holds both the pre-template and the post-template resources, and I fear automating a workflow on push will trigger some sort of infinite CI loop
    Or maybe what we need is a third repo? Something like:
    * One repo for the source code, fully agnostic of where it will be deployed, responsible for building the image and the manifest template (missing the env-specific values)
    * One repo for declaring deployment intents, "instantiating" the template with the wanted values for each env, but still with the high-level tool (e.g. declaring Helm values, KCL inputs...)
    * One repo that is bot-only, storing the templated results produced by workflows in the second repo
    Seems a bit heavy-handed though... Alternatively, that third place could be an OCI registry? But then we lose the easy diffs, which are one of the main goals of this pattern...

    • @DevOpsToolkit
      @DevOpsToolkit  12 days ago

      I would keep it in the source code repo. Whoever is working on that app needs to be able to run it as well (e.g. locally) without going to a separate repo. Whether you keep the values there or somewhere else is a separate question.

    • @ChewieBeardy
      @ChewieBeardy 12 days ago +1

      @DevOpsToolkit You're right, the question should be more about where values should be stored, or rather whether the "high-level intent" (specifying env-specific values in high-level tools) should live alongside the "low-level intent" (the actual manifests that GitOps will sync).

    • @DevOpsToolkit
      @DevOpsToolkit  12 days ago

      @ChewieBeardy I tend to keep values in environment repos (e.g. staging, production, etc.), which are the same repos where I store YAML.

  • @DerJoe92
    @DerJoe92 23 days ago +1

    So you use pipelines to produce YAML from your templates on "main" and push it to a separate "deploy" branch that is consumed by Argo CD/Flux?
    Or what would be the best workflow in your opinion?

    • @DevOpsToolkit
      @DevOpsToolkit  23 days ago +1

      That's correct. Inside the workflows I use to build images and whatever else I might be doing, I generate YAML and push it to the repo.
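
      As an illustration, a minimal sketch of such a workflow step, assuming Go with the `helm` and `git` CLIs on the PATH; the chart path, values file, and output path are all placeholders:

      ```go
      // Sketch: render a Helm chart to static YAML and commit it to the repo
      // that Argo CD or Flux syncs from. Paths and names are placeholders.
      package main

      import (
          "log"
          "os"
          "os/exec"
      )

      func run(name string, args ...string) []byte {
          out, err := exec.Command(name, args...).CombinedOutput()
          if err != nil {
              log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
          }
          return out
      }

      func main() {
          // Render the chart with the environment-specific values file.
          manifest := run("helm", "template", "my-app", "./charts/my-app",
              "-f", "values/production.yaml")

          if err := os.MkdirAll("rendered/production", 0o755); err != nil {
              log.Fatal(err)
          }
          path := "rendered/production/my-app.yaml"
          if err := os.WriteFile(path, manifest, 0o644); err != nil {
              log.Fatal(err)
          }

          // Commit the static desired state (a real pipeline would skip the
          // commit when nothing changed).
          run("git", "add", path)
          run("git", "commit", "-m", "Render my-app for production")
          run("git", "push")
      }
      ```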

    • @fkfilip91
      @fkfilip91 14 days ago +1

      @DevOpsToolkit In case you are using Helm for managing k8s manifests and have a monorepo for deploying multiple Helm charts to the clusters: are you storing the Helm charts directly on the main branch (as pure Helm charts) and then running helm template? Or are you doing something different? How do you know which version of the chart is deployed, and with which values?
      Btw, I really like the approach, but I think I'm missing this key piece of info.

    • @DevOpsToolkit
      @DevOpsToolkit  14 days ago

      @fkfilip91 If you do helm template, everything becomes static data (YAML). All the info is there. It does not even matter where the chart came from or where it's stored, since it was used only as a means to generate the desired state (YAML).

    • @DerJoe92
      @DerJoe92 14 days ago +1

      @fkfilip91 Helm-managed resources usually have labels with some chart metadata. But as Victor already stated, for Kubernetes itself it doesn't really matter anymore.

  • @nickmills8476
    @nickmills8476 27 days ago +1

    Some of our Helm values files are quite complex. Now that Kustomize handles Helm, we just store our Kustomize, YAML, Helm, etc. in Git. I don't think I would be comfortable not having the values files in Git. The generated YAML is one kustomize build away, so it's easy to check.

    • @DevOpsToolkit
      @DevOpsToolkit  27 days ago +1

      I am not advocating not storing Helm values (for each env) in Git. Quite the opposite. I think it's a horrible idea to set values with --set through the CLI. One, however, does not exclude the other.

  • @RealYethal
    @RealYethal 28 days ago +1

    Have you tried generating Kubernetes manifests using Nushell?

  • @sergeyp2932
    @sergeyp2932 28 days ago +2

    It isn't always possible to use a purely declarative style, especially with third-party apps, which can rely, for example, on Helm hooks for database schema migrations, keypair generation, etc. And it may take too much effort to fix such apps.

    • @DevOpsToolkit
      @DevOpsToolkit  28 days ago +1

      That's true. I was referring to our apps rather than third party (and should have said that in the video).

    • @JamesStrachan
      @JamesStrachan 27 days ago +2

      You can render any third-party Helm chart to YAML, but you are right, the one real downside is that we lose Helm hook support. Though for every Helm hook there are many available Kubernetes operators that work well outside of Helm, and that are usually better than Helm hooks anyway. So I'd still recommend moving, over time, to a consistent approach for all resources you deploy on Kubernetes: render them as YAML as part of your CI, whatever format they are released in (YAML/Helm/Kustomize/Timoni). It will help you greatly in understanding and tracking changes, and it avoids possibly complex issues in production clusters, where innocent-looking changes to Helm values files can have a dramatic impact. Though it's fine if you render most things to YAML and keep a small number of exceptions: charts you stick with Helm for in the short term because they make significant use of Helm hooks that you can't trivially migrate away from.

  • @BDnevernind
    @BDnevernind 28 days ago +12

    I love YAML. I regularly write files hundreds of lines long. I don't even get the objection.

    • @Mvvement
      @Mvvement 27 days ago +3

      I just find YAML manifests tedious, with a lot of boilerplate. You often want labels named the same thing, or some general best practices defined for each kind, maybe default values or global config. Also, YAML doesn't have good type checking, IntelliSense, or validation features. I like my IDE giving me feedback immediately if I formatted something wrong. Maybe it's personal preference, but I also find indentation tricky with YAML, especially when working with highly nested objects.

    • @DevOpsToolkit
      @DevOpsToolkit  27 days ago +1

      @Mvvement That's precisely why I'm advocating writing in some other format (I prefer KCL, but any should do). What I am saying is that you should use whichever tool you choose to output YAML and store that in Git (not write in YAML).

    • @BDnevernind
      @BDnevernind 27 days ago +1

      Admittedly the docs aren't great, but I've taught YAML to numerous tech writers who have no programming experience, and they pick it up intuitively. I've heard of KCL and it's neat, but I've never seen it used in the wild. There's something to be said for a simple data formatting markup that's universally supported and widely used, rather than dragging new users to your esoteric/niche language of choice, no matter how technically superior or personally preferential it is in your opinion.
      I tend to agree that indentation sucks in programming and most markup, but in my observation of teaching non-programmers about serialized/nested data, YAML is relatively intuitive vs JSON and others' superfluous/awkward syntax. Maybe I prefer it because this accessibility is crucial in my field (DocOps, not DevOps). Great exchange though, happy to hear others' preferences.

    • @DevOpsToolkit
      @DevOpsToolkit  27 days ago +1

      @BDnevernind YAML is the best option when we are not dealing with complex definitions and when there are no variations. If we are talking about tens of lines and there are no variations (e.g. different tags in different environments for the same manifests), I do not think it makes sense to write in anything but YAML. But if it ends up being hundreds or even thousands of lines, with many variations, and we need conditions, loops, and other "stuff" related to defining more complex resources, YAML fails miserably (for writing, not reading). In those cases, my advice is to go with whatever is easiest. Kustomize is a great choice when we need a few variations of the same manifests, but not much more. If it is "much more", then we do need a language designed to handle data structures. It could be a general-purpose language as well, but those tend to introduce too much boilerplate code.

    • @Mvvement
      @Mvvement 27 days ago

      @BDnevernind I am not opposed to YAML as a data format, but it doesn't do a great job when it comes to maintenance once variations come into play. Helm was created to help with this. It would be a pain to install anything in k8s if you had to configure all of the YAML for every tool you use to meet your needs. Helm has its downsides, though, because it is just using a templating language. You haven't seen as much CUE or KCL because they are new DSLs, but they both try to solve the shortcomings of Helm. There is a place for these tools, and I am sure one of them will be very popular in the future.
      I'm not sure I have ever heard of non-technical people writing k8s manifests. It seems like you would need to know something about k8s to operate effectively. If you are talking about YAML in general for your own configuration, then that is a different story. KCL is not a drop-in replacement for YAML; it is a language to automate the creation of, or to validate, complex/large YAML.