Full App Lifecycle In Kubernetes With Argo CD, DevSpace, vCluster, k3d, and GitHub Actions

  • Published: 8 Sep 2024

Comments • 69

  • @SteveSperandeo
    @SteveSperandeo 3 years ago +7

    Thanks!

  • @powersurge5576
    @powersurge5576 2 months ago +1

    Would be nice to see a new episode on app promotion across the environments. A solution that considers post deployment health checks and automatically raises a PR with the required changes for the next in line env. E.g. Test -> Stg -> Prod

    • @DevOpsToolkit
      @DevOpsToolkit  2 months ago +1

      Adding it to my to-do list...

    • @powersurge5576
      @powersurge5576 2 months ago +1

      @@DevOpsToolkit Thanks Viktor, your to-do list must be miles long :). Not sure if you've got some experience in that already! Don't want to muddy the water. I'm still researching the native capabilities of FluxCD, ArgoCD, and their adjacent projects like GitOps and Kargo. Will post my findings.

    • @DevOpsToolkit
      @DevOpsToolkit  2 months ago +1

      @powersurge5576 my to-do list is indeed massive. That subject is something I'm working on a lot, so it'll be mostly about putting it all together.

  • @swagatochatterjee7104
    @swagatochatterjee7104 3 years ago +2

    Man Man Man I love the work that you are doing. I was blown away by the shift-left series. All that's left, I guess, is figuring out how to set up a k8s-based CI system that gels well with Argo, and bam, I'll have a platform I can use for my personal projects, one that can scale up later on demand with me only giving it hobbyist time.

  • @amantilulo
    @amantilulo 3 years ago +5

    Your videos are simply amazing and a great source of inspiration. Thank you for sharing. 😁😁

  • @andrepires5251
    @andrepires5251 3 years ago +2

    Awesome video, Viktor. Keep them coming! ;)

  • @chandup
    @chandup 1 year ago +1

    A local demo combining pre-commit hooks, gitsign, cosign, devspace, vcluster, devpod, k3s/k3d, and VSCode would be really helpful for developers!!

  • @ashleymail4u
    @ashleymail4u 3 years ago +1

    We need more videos like this

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago

      Thanks for the encouragement.
      Is there a topic you'd like to suggest for one of the upcoming videos?

  • @srgpip
    @srgpip 3 years ago +1

    Thanks for the video Viktor.

  • @DevOpsToolkit
    @DevOpsToolkit  3 years ago +4

    What is the process you're using for the lifecycle of your applications? Which tools do you prefer?
    IMPORTANT: For reasons I do not comprehend (and Google support could not figure out), YouTube tends to delete comments that contain links. Please exclude links to ensure that your comments are not removed.

  • @santosharakere
    @santosharakere 1 year ago +1

    Excellent demo, thanks.

  • @javisartdesign
    @javisartdesign 3 years ago +1

    I like the combination of tools. Not sure how it can be extrapolated to a company with hundreds of developers and projects; it would be more a change of mentality than a technical limitation. Thanks for sharing.

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago +1

      Most of the challenges are NOT technical. It's much easier to change a tool than to convince everyone that the tool is useful. It's similar with processes.

  • @kevinyu9934
    @kevinyu9934 3 years ago +1

    very inspirational content!!! thank you so much, man.

  • @sergirosellf
    @sergirosellf 3 years ago +3

    hey Viktor, nice video, I think you forgot to include the link to the Gist 😜
    Thanks for all your work!

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago +1

      Thanks for letting me know. It should be fixed now :)

    • @zamboz01
      @zamboz01 3 years ago +1

      @@DevOpsToolkit Great video, but the Gist is now in the description. I love how you started with "all tools can change" and showed in detail how to start automating projects.

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago

      @@zamboz01 Hmmm... When I go to the description of the video (and click the "more" link), the link to the Gist is in the "Additional Info" section (the first one).
      Not sure why it's not visible in your case or what to do to make it visible :(

  • @mohammedragab4966
    @mohammedragab4966 1 year ago +1

    Since I use GitLab, there is no direct way to run a CI job when a merge request (also known as a PR) is merged; GitLab CI triggers on pushed changes. So I created a job that runs on the base branch, which is usually the main branch. This job calls the GitLab merge requests list and filters by merged status using jq to get the IDs. With these IDs, we can then proceed to run cleanup tasks, whether they involve vclusters or namespaces.
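
    The filtering step this comment describes can be sketched as follows. This is a hedged illustration, not the commenter's actual job: the function name and the trimmed API payload are assumptions, and the real response from GitLab's `GET /projects/:id/merge_requests` endpoint contains many more fields (the jq equivalent would be roughly `jq '.[] | select(.state == "merged") | .iid'`).

    ```python
    import json

    def merged_mr_ids(payload: str) -> list[int]:
        """Return the iids of merge requests whose state is 'merged'.

        `payload` is a JSON array of MR objects, as returned by GitLab's
        merge requests list endpoint.
        """
        return [mr["iid"] for mr in json.loads(payload) if mr.get("state") == "merged"]

    # Example payload trimmed to the two fields this sketch uses.
    sample = '[{"iid": 7, "state": "merged"}, {"iid": 8, "state": "opened"}]'
    print(merged_mr_ids(sample))  # → [7]
    ```

    The resulting IDs would then drive the cleanup tasks (deleting the matching vclusters or namespaces) in a later job step.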

    • @DevOpsToolkit
      @DevOpsToolkit  1 year ago +1

      Yeah. It's a bit more difficult, but doable.

    • @mohammedragab4966
      @mohammedragab4966 1 year ago +1

      @@DevOpsToolkit Ya I did it already and it works

  • @valour.se47
    @valour.se47 3 years ago +1

    Great explanation

  • @fer-ri
    @fer-ri 3 years ago +1

    Thanks Viktor

  • @WowWow36284
    @WowWow36284 3 years ago +3

    Assuming it were a big app with many microservices behind the frontend, would you only build the images that changed in the PR and use the existing ones for the others?
    I think it could be quite hard to do.
    Further, how would you expose the endpoint with an ingress / HTTPS certs?
    It would be great to E2E-test all of it, but I see it as a challenge for large systems.

  • @JonasLarsen
    @JonasLarsen 3 years ago +1

    Thanks for a great video :-)

  • @AnthonyBurback
    @AnthonyBurback 1 year ago +1

    3:17 made me laugh hard

  • @jonassteinberg9598
    @jonassteinberg9598 2 years ago +1

    Creating a load balancer as part of the workflow probably isn't a good idea: if the destruction of that load balancer depends on the PR being merged or closed, load balancer costs will skyrocket; they are quite expensive. What would be better, not to state the obvious, but just in case anyone was thinking through this, would be to use (preferably) an API gateway, in which the PR effectively adds a route, or, similarly, a service mesh (not a fan, really). This way you're not wasting a ton of money on limbo LBs. My two cents.

    • @DevOpsToolkit
      @DevOpsToolkit  2 years ago

      The Ingress controller is what creates a load balancer. Adding a route to Ingress (an Ingress resource) does not create an LB.

  • @itamarmarom7550
    @itamarmarom7550 2 years ago +1

    Hi Viktor, are you planning on an ArgoCD autopilot video?

    • @DevOpsToolkit
      @DevOpsToolkit  2 years ago +1

      I am :)
      It's already on my TODO list but, seeing that there is interest in it, I'll bump it up so that it's closer to the top.

  • @quant-daddy
    @quant-daddy 3 years ago +3

    Great video Viktor. I have a question: do I have to build a new image when I merge the PR branch to production, or can I use the image built from the PR branch, since the same image would be built from the same code once it is merged?

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago +8

      You don't have to build a new image when merging a PR into the mainline. As a matter of fact, it should NOT be a new image. That way you know that what you're deploying to production is what was tested in the previous environments.
      In that case, I would create a new tag based on the PR image, mostly so that it's properly versioned (e.g., semantic versioning).
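
      The "retag, don't rebuild" idea can be sketched like this. A minimal illustration, not the channel's actual pipeline: the version-bump helper, the image name `ghcr.io/org/app`, and the `pr-42` tag are all hypothetical, and the `crane tag` command shown in the comment is just one of several tools (e.g., `docker tag` + `docker push`) that can add a tag to an existing image.

      ```python
      def release_tag(version: str, part: str = "minor") -> str:
          """Bump a semantic version; the result becomes the new tag for the
          already-built and already-tested PR image."""
          major, minor, patch = (int(x) for x in version.split("."))
          if part == "major":
              return f"{major + 1}.0.0"
          if part == "minor":
              return f"{major}.{minor + 1}.0"
          return f"{major}.{minor}.{patch + 1}"

      # The existing PR image is then re-tagged (not rebuilt), e.g. with crane:
      #   crane tag ghcr.io/org/app:pr-42 1.3.0
      print(release_tag("1.2.9"))  # → 1.3.0
      ```

      The point is that the bytes deployed to production are exactly the bytes tested in the PR environment; only the tag changes.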

  • @boriphuthsaensukphattraka3397
    @boriphuthsaensukphattraka3397 3 years ago +1

    Cool thank you

  • @mateuszszczecinski8241
    @mateuszszczecinski8241 3 years ago +2

    I have some theoretical questions and considerations about dynamic environments and testing based on pull requests.
    1. Do you prefer a PR/branch per user story (which delivers a complete feature) or a PR per task (which delivers only part of a feature/story)? In the latter case there is no sense in creating an environment for manual/UAT testing per pull request (because the feature you want to test isn't complete), only for automated regression testing. The former approach, in my opinion, breaks the continuous-integration rule of short-lived feature branches: the branch is potentially shared by many developers and can live longer than one or two days, which can cause merge conflicts. Do you agree?
    2. When you merge your PR to the mainline, it was tested on the open PR, but code merged from another PR in the meantime could break something. So you should also run all the runtime tests (functional, performance, etc., on a production-like env.) after the merge, duplicating the tests. So why not run only unit tests (which are faster) when creating a PR, and the runtime tests after the merge? In that scenario, you really don't need on-demand environments. Or is there another way to avoid conflicts between pull requests?

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago +1

      1. I prefer a feature branch that results in a PR for a minimum deliverable to production. Whether that is called a story, a task, or something else is not that important. What does matter is that a branch is eventually merged to the mainline and delivered to production. The less time spent between creating a branch and merging it to the mainline, the better.
      2. A PR should always contain the latest code from the mainline. That means that you should merge the mainline into the PR whenever the mainline changes. Assuming that any change to the PR executes a pipeline that runs the tests and whichever other verifications you might have, the order in which PRs are merged to the mainline should not matter. That also means that you do not have to run tests after merging to the mainline, since you ran them after merging the mainline into the PR. As for on-demand environments... If you do not have environments for PRs, you cannot run anything but static tests, which would make merging them to the mainline a bad idea, assuming that a merge to the mainline is the signal that the process ending with deployment to prod should start.
      In other words, PRs are where most of the action is happening, and when we merge them to the mainline, we do so because we are confident that the code is ready for production. We can never be 100% confident, but we should get close to it.

  • @mohammedragab4966
    @mohammedragab4966 1 year ago +1

    Sounds great, but I think PR environments fit stuff like frontend applications. If I have a stateless app that connects to a DB and there are migration changes related to the PR, you cannot use one database for all PRs because they would conflict with each other. In that case you might use a temporary database that is removed after the PR is merged, which ends up being a very complex situation, in addition to installing stuff like Traefik or NGINX in the virtual cluster for every PR, and so on. What do you think?

    • @DevOpsToolkit
      @DevOpsToolkit  1 year ago +1

      Yeah. The second part of the story is how to connect apps running in preview envs to those running in permanent ones like staging or production. I'm working on two videos on that subject. They will be released in the upcoming weeks.

  • @cesarinleyva
    @cesarinleyva 3 years ago +2

    What kind of fighting games do you play on that dual fightstick?

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago +1

      I rarely play fighting games, mostly because there's no one in my family to play them with :(
      The last one I played is Shadow Dancer (en.wikipedia.org/wiki/Shadow_Dancer) and before that Wonder Boy (en.wikipedia.org/wiki/Wonder_Boy).
      Below those sticks is an arcade machine loaded with 3k arcade games from the '80s and '90s.

  • @jdu1111
    @jdu1111 2 years ago +1

    I'm assuming the "production cluster" you have here is in some cloud environment that allows GitHub Actions runners to connect to it publicly?

    • @DevOpsToolkit
      @DevOpsToolkit  2 years ago

      There is no need for GitHub Actions or any other tool to connect to the production cluster when using Argo CD or Flux. Those are pull-based tools running inside your clusters, and the only thing you need in GitHub Actions (or any other pipeline tool) is to make changes to the desired state stored in a Git repo.

    • @jdu1111
      @jdu1111 2 years ago +1

      I meant more the pr-open action using vcluster. That command uses the kubeconfig you set as a secret, but it still needs to access the API somehow. Is your cluster just public in this case?

    • @DevOpsToolkit
      @DevOpsToolkit  2 years ago

      @@jdu1111 Yes. In that specific case, the cluster is accessible from GitHub actions. In a real-life scenario, that would not be the production cluster though. Also, I'm using GitHub Actions only as an example. You could accomplish the same result with, let's say, Tekton or Argo Workflows running inside the cluster used for preview environments. In that case, you would not need to enable access to it since the source (Tekton/Argo) is the same as the destination (it's running in the same cluster).

    • @jdu1111
      @jdu1111 2 years ago +1

      @@DevOpsToolkit I figured! Thanks for all the videos!

  • @overseer_grimal
    @overseer_grimal 3 years ago +1

    Have you tried the Kubernetes platform Deckhouse? Would like to hear your opinion

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago +1

      I haven't tried it (yet). Adding it to my TODO list...

  • @marcothernandez
    @marcothernandez 2 years ago +1

    What would you do when applications are not to be tested in isolation? E.g., the service changed in the pull request depends on many other services. Would you deploy them all in the preview environment?

    • @DevOpsToolkit
      @DevOpsToolkit  2 years ago +1

      You can always configure your app (or apps) in preview envs to talk to apps in other environments (e.g., staging, production, etc.).

  • @kostyazgara3481
    @kostyazgara3481 1 year ago +1

    Hi Viktor, thanks for a great video! But I'm interested in what to do if I want to use my mainline branch to deploy changes to staging first and promote changes to production only via a tag. I'm using Argo CD and keep all manifests, including the app's Argo CD custom resource, inside the app repo. I believe that to deploy changes to staging it's enough to set the target revision of the Argo app to HEAD, but what should I do when I set a tag and my production Argo app lives in another cluster and should point only to a specific tag? Since Argo can monitor only a specific tag, I have to tell it which new tag to sync to. I see several approaches. One of them is to get direct access to the production Argo server and manually set the target tag in my CD, but this approach violates GitOps principles and I'd have to give my pipeline access to the cluster. The second way is to create another repository to hold the Argo app manifest, plus a new Argo app that syncs changes from that separate repository, and push a new commit from my app repo to change the target tag in the Argo app. What do you think? Both solutions are not good enough from my perspective, but perhaps you have another recipe for adopting trunk-based development with Argo CD and releasing from a tag? Thanks!

    • @DevOpsToolkit
      @DevOpsToolkit  1 year ago

      I always use specific (container) tags in Argo CD apps (together with env-specific variables). That way, I know exactly which env-specific properties are associated with each app in each environment. I'm not sure why you think that's not "good enough".

    • @kostyazgara3481
      @kostyazgara3481 1 year ago +1

      @@DevOpsToolkit By not good enough, I mean I don't like the idea of accessing the cluster directly, and I don't like having extra repositories.
      If you use a tag for the Argo target revision, then how do you update that tag? If I just change a tag in my app manifest and push to Git, Argo CD will not know about the change because it's still syncing to the previous tag. We need some way to tell the Argo app to use another target revision, and I'm interested in how exactly that can be done.

    • @DevOpsToolkit
      @DevOpsToolkit  1 year ago

      @@kostyazgara3481 I think we misunderstood each other. I am not suggesting anyone ever access any cluster (except in break-glass scenarios). I was referring to image tags defined in Argo CD App manifests stored in Git.
      As for "extra" repositories... That's up to you. You can have many repos, one repo, or anything in between. When you define an Argo CD App, you have to specify a repo and a directory, so you can organize "stuff" within directories as well.
      > If you use a tag for the Argo target revision, then how do you update that tag?
      You can update the manifest using pipelines (e.g., Jenkins, GitHub Actions, Argo Workflows) or you can use more specialized tools like Akuity Kargo. In any case, the point is that, one way or another, you modify the desired state in Git.
      > If I just change a tag in my app manifest and push to Git, Argo CD will not know about the change because it's still syncing to the previous tag.
      I'm not sure why you would use Git tags for that (you have image tags that matter, and the mainline should represent production). Nevertheless, if you do use Git tags, you need to reference those tags in App manifests, and that means that you modify the manifest and push it back to Git. Remember, GitOps is all about defining the desired state. If you don't want to have the desired state in Git, GitOps is not a good option.
      All in all, change Argo CD manifests and push them to Git. Do that through pipelines that you use for other types of tasks or through specialized tools that monitor container image registries. The end result must be the desired state stored in Git.
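
      The "pipeline modifies the manifest and pushes it back to Git" step can be sketched like this. A hypothetical illustration only: real pipelines typically use a YAML-aware tool such as yq or Kustomize's `images` transformer rather than a regex, and the manifest fragment and image name below are made up.

      ```python
      import re

      def set_image_tag(manifest: str, image: str, new_tag: str) -> str:
          """Replace the tag of `image` in a manifest's 'image: repo:tag' lines.

          The modified text would then be committed and pushed, letting
          Argo CD sync the new desired state.
          """
          pattern = re.compile(rf"(image:\s*{re.escape(image)}):\S+")
          return pattern.sub(rf"\1:{new_tag}", manifest)

      manifest = "containers:\n  - name: app\n    image: ghcr.io/org/app:1.2.9\n"
      print(set_image_tag(manifest, "ghcr.io/org/app", "1.3.0"))
      ```

      Whatever tool performs the rewrite, the essential property is the same: the change lands in Git, and Argo CD reconciles the cluster toward it.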

    • @kostyazgara3481
      @kostyazgara3481 1 year ago

      @@DevOpsToolkit I'd like to set the target revision to a Git tag, because not all changes are present in a new image tag. Let's say I have added a new lib that requires a new env variable. To inject the new env var I obviously need to modify the Deployment manifest, but that change is not present in the new image, right? And that change should be applied by Argo. But okay, it seems I got you: the only true GitOps way is to somehow change the Argo app manifest. And this "somehow" is the tricky point for me, but I hope I will find the solution that best fits my needs. Thanks again!

  • @geowalrus
    @geowalrus 3 years ago +1

    If I use Argo Workflows instead of GitHub Actions, how would the integration of Argo Workflows with a Git repository (say, Bitbucket) work for the PR approval process?

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago

      Normally, you'd use Argo Events to notify Argo Workflows (or something else) whenever specific events occur in Git (or anywhere else). In that context, think of Argo Events as the entity that a) creates Git webhooks and b) figures out where to redirect the requests coming from those webhooks. Argo Events is much more than that but, within the context of your question, that's what it does.
      Is that what you meant, or did I misunderstand your question?

  • @1879heikkisorsa
    @1879heikkisorsa 2 years ago +1

    Very nice setup! Do you have any hints on how it would work in a microservices architecture, so that every PR deploys not only the single application but also the rest of the microservices it uses? Also, instead of adding commits programmatically for deploying to production, you could extract the kustomize part into a second repo containing all the infrastructure and just update that on each prod release. This way your main repo stays clean, you still have all versions in Git, and Argo CD pulls from this infra repo!

    • @DevOpsToolkit
      @DevOpsToolkit  2 years ago +1

      Unless you have only a few small apps, I do not think it's a good idea to deploy everything for every PR. That would cost too much (use too much compute). Nevertheless, if that's what you need to do, I would create a virtual cluster (e.g., vCluster) or a Namespace, fetch the manifests that describe production, modify the tag of the app in question, and deploy them all, directly or by pushing changes to Git.
      In my case, even though for simplicity that's not how I organized the demo, the production env repo contains only the references to manifests and env-specific overwrites. The base manifests are in the repos of the applications. I think that's similar, if not the same, as what you're describing when you say that the main repo should stay clean.

    • @Prashant-tk8re
      @Prashant-tk8re 1 year ago +1

      @@DevOpsToolkit How else do you recommend creating a staging environment for a microservices architecture where you need to test the changes to one service?

    • @DevOpsToolkit
      @DevOpsToolkit  1 year ago

      Deploy that one microservice and connect it to other apps running in production or staging.

    • @Prashant-tk8re
      @Prashant-tk8re 1 year ago

      @@DevOpsToolkit In that case, wouldn't you have to ensure that the other staging apps are not running some modified code? I mean, to test my service, I should be sure that the other services are running the latest release branch and not some random branch that is still being developed.
      Maybe one would use different namespaces for this, right? So a namespace with all the apps mimicking the production apps, and another namespace where we deploy the apps under test. The apps under test then interact with the services in the "mimicking" namespace.

  • @rundeks
    @rundeks 2 years ago +1

    Is DevSpace still your recommendation for Local Kubernetes development?

  • @jonassteinberg9598
    @jonassteinberg9598 2 years ago +1

    The cutaways to scary adults and children are terrifying, FYI.

  • @josephcasey7479
    @josephcasey7479 3 years ago +1

    Thanks!

    • @DevOpsToolkit
      @DevOpsToolkit  3 years ago

      Thanks a ton, Joseph.
      Contributions like that one are what keep the channel going and the expenses tolerable.