[ Kube 25 ] Running Jenkins in Kubernetes Cluster using Helm

  • Published: 22 Oct 2024

Comments • 234

  • @rammohan4056
    @rammohan4056 3 years ago +1

    You are a master teacher, driving many slave learners.

  • @user-ano-x5c
    @user-ano-x5c 4 years ago +3

    I'm learning so much from your series. Keep it up, and many thanks!

  • @bhalchandramekewar6015
    @bhalchandramekewar6015 4 years ago +1

    Hi Venkat,
    Thanks for creating these wonderful learning sessions. It's quite interesting to learn Kubernetes deeply, bits and pieces at a time. It's really helpful for CKAD as well.

    • @justmeandopensource
      @justmeandopensource  4 years ago +3

      Hi Bhalchandra, you are welcome, and thanks for following my series.

  • @ShreyasDangetechie
    @ShreyasDangetechie 4 years ago +2

    Thanks a lot for the video, very well explained. I have a question though: what is NFS, and why do we need it here as a pod plus a mounted volume?
    Please get back on this.
    Thanks!

    • @justmeandopensource
      @justmeandopensource  4 years ago +2

      Hi, thanks for watching. Have a read about what NFS is in this wiki article:
      en.wikipedia.org/wiki/Network_File_System
      It's just a place to store data and share it with everyone.
      I used NFS here to persist the data for Jenkins. Cheers.
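For readers wondering what the NFS piece looks like in practice, here is a minimal sketch of a PersistentVolume backed by an NFS export that Jenkins can claim for its home directory. The server address, export path, and size are placeholders, and the video uses dynamic NFS provisioning rather than a hand-written PV, so treat this as illustrative only.

```shell
# Hypothetical NFS-backed PV/PVC for the Jenkins home directory.
# 192.168.1.50 and /srv/nfs/kubedata/jenkins are placeholder values.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-pv
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.1.50
    path: /srv/nfs/kubedata/jenkins
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
EOF
```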

  • @susheelkumarv
    @susheelkumarv 5 years ago +1

    Hi Venkat, Nice video. The setup works flawlessly. Now that I have Jenkins up and running, as a next step, I wanted to configure a CI/CD pipeline for my Microservices with Jenkins and Helm on Kubernetes. Would be very helpful if you can make a video on this topic to complement the tutorial series. Thanks.

    • @justmeandopensource
      @justmeandopensource  5 years ago

      Hi Susheel, thanks for watching this video.
      May I know exactly what your workflow in the Jenkins pipeline is? Just give me some direction on what you want to do, and I will put it together in a video.
      Thanks

    • @susheelkumarv
      @susheelkumarv 5 years ago +1

      I have a bunch of backend microservices (all REST-based) created using Spring Boot, each connected to its own MongoDB database. I need to set up CI/CD pipelines for these microservices to automate the build/deployment process. The target is to run these services and databases in Kubernetes.
      The Jenkins pipeline workflow should look similar to this:
      Version Control --> Build --> Unit test --> Build Docker image --> Push image to private Docker Registry --> Deploy to staging K8S cluster (using Helm) --> Measure + Validate --> Deploy to Prod K8S cluster (using Helm).
      It would be very helpful if you could showcase this in one of your next videos. Thanks.
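The stages described above map naturally onto a sequence of shell steps inside a pipeline job. This is only a rough sketch of the shape of such a workflow; the image, chart path, release, and namespace names are placeholders, not a working pipeline.

```shell
# Rough shape of the requested workflow; all names are placeholders.
git checkout main                                  # version control
mvn -B package                                     # build + unit tests
docker build -t registry.example.com/app:1.0.0 .   # build Docker image
docker push registry.example.com/app:1.0.0         # push to private registry
helm upgrade --install app ./chart \
  --namespace staging --set image.tag=1.0.0        # deploy to staging via Helm
# ...measure/validate, then promote:
helm upgrade --install app ./chart \
  --namespace production --set image.tag=1.0.0     # deploy to prod via Helm
```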

    • @justmeandopensource
      @justmeandopensource  5 years ago +1

      @@susheelkumarv No problem. I will look into this. I will have to do some learning as I am not a pure DevOps person.

  • @shrikanyaghatak
    @shrikanyaghatak 4 years ago +2

    Can you please do a short video on the use cases for Helm? I really love the videos that you do. Thanks for sharing!!

    • @justmeandopensource
      @justmeandopensource  4 years ago +1

      Hi Shrikanya, thanks for watching. I will see if I can do it.

  • @milindchavan007
    @milindchavan007 5 years ago +2

    Hello, this is awesome information about running Jenkins in a Kubernetes cluster. I really appreciate the way you explain difficult concepts.
    I request you to please show the deployment of a PHP application using Jenkins, including building the Docker image and deploying it to the nodes. I would be very glad if you could, because on the internet people say to use Docker-in-Docker but offer no fruitful information. Could you please make a small effort to clear up this concept?

    • @justmeandopensource
      @justmeandopensource  5 years ago +1

      Hi Milind, thanks for watching this video.
      May I know exactly what you want to do please? Or if you could direct me to an example workflow or steps, I can try it in my environment and make a video of it.
      Thanks.

    • @milindchavan007
      @milindchavan007 5 years ago +1

      Thanks for your consideration of my request. I actually want to use Jenkins inside my k8s cluster, which will use a Dockerfile to build the image and deploy it. Basically I want to use the Docker-in-Docker concept.

    • @justmeandopensource
      @justmeandopensource  5 years ago +1

      @@milindchavan007 No problem. I will think of a simple example and have a play with it and then make a video. Thanks.

    • @milindchavan007
      @milindchavan007 5 years ago +1

      Thanks man!

    • @justmeandopensource
      @justmeandopensource  5 years ago +1

      @@milindchavan007 You are welcome. Cheers.

  • @InterviewDOT
    @InterviewDOT 4 years ago +1

    Thank you so much, very nice presentation 👌

  • @workingtraveller7897
    @workingtraveller7897 4 years ago +1

    Thanks Venkat, really helpful.
    Please advise how to create a docker image or use Docker from this Jenkins without creating any other Docker-host VM.

    • @justmeandopensource
      @justmeandopensource  4 years ago +1

      Hi Sachin, thanks for watching. You can use an existing Jenkins image and build your own on top of it.

  • @frannelk
    @frannelk 5 years ago +4

    Well explained, I had the chance to learn something new and very useful, cheers :-)

    • @justmeandopensource
      @justmeandopensource  5 years ago +1

      Thanks for watching this video. Cheers

    • @atulbarge5808
      @atulbarge5808 4 years ago +1

      @@justmeandopensource How do I restart Kubernetes pods? Please let me know.

    • @justmeandopensource
      @justmeandopensource  4 years ago +2

      @@atulbarge5808 There is no way to restart pods as such. You just delete the pod with "kubectl delete pod <pod-name>" and the cluster will re-launch it.
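To make the "restart by deleting" idea concrete: as long as the pod is managed by a Deployment (or a similar controller), deleting it triggers a replacement. Newer kubectl versions (1.15+) also ship a rollout restart subcommand. The pod and deployment names below are placeholders.

```shell
# Delete a managed pod; its Deployment/ReplicaSet recreates it.
kubectl delete pod jenkins-7f9d5b6c4-abcde

# On kubectl 1.15+, restart every pod in a deployment in one go.
kubectl rollout restart deployment/jenkins
```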

  • @jhansimadhavi352
    @jhansimadhavi352 4 years ago +2

    Hello, glad to have found you. Very nice explanation. I would like to know more real-time scenarios in Kubernetes: what tasks do people actually do in a project environment, like managing the cluster, taking backups, checking the logs, and if the master fails how can we proceed further, etc. I appreciate your help. Thanks in advance.

    • @justmeandopensource
      @justmeandopensource  4 years ago +3

      Hi Jhansi, thanks for your interest in this channel. I personally don't manage or have experience managing any production clusters. If you go with a managed Kubernetes service in the cloud, then you don't have to worry about masters failing. It will all be taken care of for you. If you manage the cluster yourself, then you will have to deploy an HA solution with multiple controller nodes and etcd nodes. It would be a lot easier to use automation tools like Kubespray/KOPS to provision and manage clusters yourself. There are different ways to back up the cluster. One that I have used is Velero, and I have done a video on it. For log monitoring I have done a video on the EFK stack in Kubernetes.
      Cheers.

    • @jhansimadhavi352
      @jhansimadhavi352 4 years ago +2

      Just me and Opensource thank you so much for your response.. 😊Keep up doing great work👏👏

    • @justmeandopensource
      @justmeandopensource  4 years ago +1

      @@jhansimadhavi352 Will do. Cheers.

  • @thesivaz
    @thesivaz 5 years ago +1

    Nice video to get started quickly...👍👍👍👍👍

  • @suhassureshmalusare9688
    @suhassureshmalusare9688 4 years ago +1

    Thank you, well explained and demonstrated. Will you please demonstrate how we can retain the workspace once a pod is deleted?

    • @justmeandopensource
      @justmeandopensource  4 years ago +1

      Hi Suhas, thanks for watching. I think you will have to archive the artifacts you are interested in, which will be stored on the Jenkins master. Or you can use a shared volume that all Jenkins slave pods use as the workspace directory. So when the pod gets deleted, you still have access to the workspace, which you can mount from any other pod. Cheers.
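The shared-volume idea above generally means a ReadWriteMany claim that every agent pod mounts. A hedged sketch of such a claim, assuming an NFS-backed storage class named "nfs" already exists in the cluster (both the class and claim names are placeholders):

```shell
# Shared workspace claim that multiple agent pods can mount at once.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-shared-workspace
spec:
  accessModes:
    - ReadWriteMany        # required so several agent pods can mount it together
  storageClassName: nfs    # assumed NFS-backed class; adjust to your cluster
  resources:
    requests:
      storage: 5Gi
EOF
```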

    • @suhassureshmalusare9688
      @suhassureshmalusare9688 4 years ago +1

      @@justmeandopensource, Thank you for your reply. When I mapped the host volume inside the pod to store the slave workspace, the files were visible inside the pod and not on the host (due to a permission issue; it has a dirty fix). Anyway, I found two ways to address this issue: 1. Zip and upload the workspace to an S3 bucket, or 2. Zip and copy it to the host using a bind volume.
      Warm regards,
      Suhas

    • @justmeandopensource
      @justmeandopensource  4 years ago +1

      @@suhassureshmalusare9688 Glad that you figured it out. Cheers.

  • @udayalwala3859
    @udayalwala3859 3 years ago +1

    Hi Venkat, your videos are awesome. I have a query: can we use slaves which are installed on VMs while Jenkins itself is installed using Helm? If you have a Slack channel, let me know please.

    • @justmeandopensource
      @justmeandopensource  3 years ago +1

      Hi Uday, thanks for watching. You can configure any machine to be a Jenkins slave agent provided it has network access to the Jenkins master. The link to the Slack workspace is in the channel banner. I won't always be available in Slack, but there are like-minded people who might respond to queries. Cheers.

  • @mikebabs5505
    @mikebabs5505 5 years ago +3

    Hey, great videos. Do you plan to do one on ingress controllers? Thanks for all your contributions.

    • @justmeandopensource
      @justmeandopensource  5 years ago

      Hi Mike,
      Thanks for watching my videos.
      I have already recorded videos for ingress controllers, and they are waiting to be released in the coming weeks. As you probably know, I release one video per week on Mondays.
      Below is the schedule and you are the first one to know.
      Mar 11: Set up Kubernetes Cluster using LXC containers
      Mar 18: Set up Nginx Ingress in Kubernetes Bare Metal
      Mar 25: Set up Traefik Ingress in Kubernetes Bare Metal
      Apr 1: Set up MetalLB Load Balancing for Bare Metal Kubernetes Cluster
      Thanks,
      Venkat

    • @mikebabs5505
      @mikebabs5505 5 years ago +1

      Thanks. I’m Learning so much from you.

    • @justmeandopensource
      @justmeandopensource  5 years ago

      Cool. That's what keeps me motivated. Thanks

    • @kasimshaik
      @kasimshaik 5 years ago +1

      @@justmeandopensource Hi Venkat, how are you planning to show the Nginx ingress controller video? Using Helm, or manually creating the Ingress using a default-backend deployment & service and then the Nginx deployment & service?

    • @justmeandopensource
      @justmeandopensource  5 years ago

      Manually deploying the resources into the cluster. Although it can be deployed as a helm chart, it is very simple to deploy the controller. Thanks.

  • @poojavijayaraj5417
    @poojavijayaraj5417 5 years ago +3

    Hi Venkat, could you tell us the configuration of the nodes in the cluster, i.e. CPU, memory etc.? I have deployed a Kubernetes cluster on GCP where each of my nodes has 1 vCPU and 3.75 gigs, and when I install Jenkins through Helm in the cluster, the pod is stuck in the Pending state. What resources should each node have for Jenkins to run?

    • @justmeandopensource
      @justmeandopensource  5 years ago

      Hi Pooja, thanks for watching this video. My environment consists of worker nodes, all of which have 2 CPUs and 2GB RAM.
      You can look at "kubectl describe" on the deployment to see exactly why it is in the Pending state. Also take a look at the values.yaml file to see if you can adjust the resource limits and requests.
      Thanks.
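For anyone debugging the same Pending state: the scheduler's reason appears in the pod events, and the chart's resource requests can be lowered at install time. The label selector and value keys below follow the classic stable/jenkins chart layout and may differ in your chart version, so treat them as assumptions to check against your values.yaml.

```shell
# Show why the scheduler could not place the pod (see the Events section).
# The label "app=jenkins" is an assumption; check your pod's actual labels.
kubectl describe pod -l app=jenkins

# Install with smaller requests; key names vary by chart version.
helm install jenkins stable/jenkins \
  --set master.resources.requests.cpu=500m \
  --set master.resources.requests.memory=512Mi
```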

  • @snake8801
    @snake8801 4 years ago +3

    Hi Venkat, can you review this with the latest version of the Helm Jenkins values? There are lots of differences. It would be great if you could share the running values file; that would be really helpful.
    Thanks
    Sam

  • @rajeshg.n4561
    @rajeshg.n4561 5 years ago +1

    Hi Venkat, could you please present a video on installing Jenkins in a GKE cluster, Spinnaker in another GKE cluster, and the application running on a third cluster? The strategy is CI (Jenkins) in one GKE cluster, CD (Spinnaker) in another GKE cluster, and the application up and running in another GKE cluster.

    • @justmeandopensource
      @justmeandopensource  5 years ago +1

      Hi Rajesh, thanks for watching this video. At the moment, I am focusing on Kubernetes and where possible on the AWS cloud. I have Spinnaker on my list. Let me see if I can get that done. Will have a think about it. Thanks.

  • @Kk-rl7nv
    @Kk-rl7nv 8 months ago +1

    Hi Venkat,
    Can we make a highly available Jenkins master via a StatefulSet with 3 replicas, keeping 3 PVs and 3 PVCs separately (either on local NFS or on any cloud provider) so each is assigned per the StatefulSet's ordinal order to maintain data consistency? If yes, could you make a video on that?

    • @justmeandopensource
      @justmeandopensource  8 months ago

      I tried something similar a while ago. Jenkins by itself doesn't support a cluster architecture. You could only connect to one of the instances in the StatefulSet. If we used a shared disk for all the instances, then it might be possible. But that is not how a cluster should be set up. I'll see if there are any solutions.

  • @kasimshaik
    @kasimshaik 5 years ago +1

    Hi Venkat, I remember you created a video on Spinnaker, but I don't see it now. Did you remove it from the list?

    • @justmeandopensource
      @justmeandopensource  5 years ago

      Hi Kasim, thanks for watching this video. Spinnaker is on my list, but I am yet to do a video on it. Will definitely do one. Thanks.

  • @MsBrati
    @MsBrati 4 years ago +2

    It's a well-explained one. Can you please tell us, instead of NFS, what parameters should be used for AWS EBS?

    • @justmeandopensource
      @justmeandopensource  4 years ago +1

      Hi Brati, thanks for watching this video. Do you mean EFS? EBS is block storage and is attached as volumes. EFS is the file storage that can be mounted as NFS shares. Do you want to deploy dynamic NFS provisioning in your cluster with AWS EFS instead of an NFS server of your own?
      If that's your requirement, then create an EFS filesystem in AWS. You will get the DNS name of the EFS filesystem and the mount target ID, which you can use in your dynamic-nfs-provisioning manifests.
      The article below might give you some direction.
      medium.com/@while1eq1/using-amazon-efs-in-a-multiaz-kubernetes-setup-57922e032776
      Thanks

    • @MsBrati
      @MsBrati 4 years ago +1

      @@justmeandopensource Thanks for the reply. Can I use EBS for dynamic provisioning instead of NFS?

    • @justmeandopensource
      @justmeandopensource  4 years ago +1

      @@MsBrati Yes you can.
      kubernetes.io/docs/concepts/storage/storage-classes/
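To expand on the linked docs: dynamic provisioning with EBS just needs a StorageClass that points at the AWS EBS provisioner; each PVC created against it then gets its own EBS volume. A minimal sketch using the in-tree provisioner from that era (newer clusters use the `ebs.csi.aws.com` CSI driver instead):

```shell
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp2
provisioner: kubernetes.io/aws-ebs   # in-tree provisioner; newer clusters use ebs.csi.aws.com
parameters:
  type: gp2
reclaimPolicy: Delete
EOF
```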

    • @MsBrati
      @MsBrati 4 years ago +1

      @@justmeandopensource Thanks. One more query: say I build n number of jobs, so n number of slaves will get created; will they then terminate after the build?

    • @justmeandopensource
      @justmeandopensource  4 years ago +1

      @@MsBrati It depends on your Kubernetes plugin configuration. You can define the number of executors in your slave pod definition if you want to run more than one job in the same slave pod. But in general it's best to leave the number of executors at 1. Since the slave pods are just docker containers, it is easy to launch and terminate them as needed. There is also another setting in the plugin for the idle timeout. Say you set it to 5 minutes: if no new jobs are triggered and the slave is idle for 5 minutes, it will be terminated. So yes, when you set the executors to 1, slave pods will get created for each job and then get terminated after the idle timeout. Cheers.
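Those plugin settings can also be driven from the chart's values rather than the UI. The exact keys depend on the chart version, so treat the names below (taken from the jenkins/jenkins chart) as assumptions to verify against your own values.yaml.

```shell
# Reap idle agent pods after 5 minutes and cap concurrent agent pods.
# Key names are assumptions; check `helm show values jenkins/jenkins`.
helm upgrade jenkins jenkins/jenkins \
  --set agent.idleMinutes=5 \
  --set agent.containerCap=10
```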

  • @nagachaitanyab3363
    @nagachaitanyab3363 3 years ago

    Hi anna,
    Thanks for the video.
    I have a Jenkins setup running on K8s.
    What if I want to add users or credentials in the values.yaml file and helm upgrade, instead of doing it in the Jenkins console?
    How do I do that?

  • @kartikmoolya6994
    @kartikmoolya6994 5 years ago +2

    Can you tell me why we needed a PV for this, and what its significance is for the Jenkins deployment? I think I know the answer, but I want to hear it from you.

    • @justmeandopensource
      @justmeandopensource  5 years ago +3

      Hi Kartik, thanks for watching this video. Glad that you know the answer yourself. For the benefit of others going through these comments, here is the explanation. I believe I also mentioned it in this video.
      Kubernetes is a container orchestration platform where you specify your application in a declarative manner. You define your application and what it needs. Kubernetes will take care of launching the container and maintaining it. If the container crashes or the node where it runs crashes, Kubernetes will take care of re-scheduling the container on a different node. It always makes sure that your application is running. So that's container orchestration.
      Jenkins needs a place to store its configuration data and other build-related files. The Jenkins data directory can be configured to any location you want. Imagine if you didn't create a persistent volume: when the container or node crashes, the Jenkins pod will get re-created without any data, and you will have to go through plugin installation and job configuration again. If you created a persistent volume, Kubernetes will attach the volume to the pod whenever it creates it. So your data is persisted.
      That's a very lengthy explanation. Hopefully it answers your question.
      Thanks.
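In chart terms, the persistence described above is usually just a values toggle at install time. The flag names below follow the common convention of the Jenkins chart (and the storage class name is an assumption), so check them against your chart version's values.yaml.

```shell
# Enable a persistent volume claim for the Jenkins home directory.
helm install jenkins stable/jenkins \
  --set persistence.enabled=true \
  --set persistence.size=8Gi \
  --set persistence.storageClass=managed-nfs-storage   # assumed class name
```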

    • @kartikmoolya6994
      @kartikmoolya6994 5 years ago +1

      Bingo. I'm new to Kubernetes and had this in the back of my mind, though. So every newly created Jenkins pod will have the jobs created by the older pods.

    • @justmeandopensource
      @justmeandopensource  5 years ago +4

      @@kartikmoolya6994 Normally you will have only one replica of Jenkins running in your Kubernetes deployment. Having more than one replica won't make sense for high availability. For out-of-the-box high availability, you have to use CloudBees Jenkins Enterprise. As long as you are using Kubernetes you don't have to worry about high availability; maybe there is a very brief downtime while the pod is being re-created after it crashes.
      But the new pod that gets created will mount the same persistent volume and you will have all your configurations and jobs restored.
      Thanks.

  • @nishgupta29
    @nishgupta29 4 years ago +1

    Hey Venkat, thanks for the video. I am provisioning Jenkins agents dynamically using Helm, but we want to cache Maven dependencies on the agents so the build process doesn't download dependencies every time. Is there a way to achieve this?

    • @justmeandopensource
      @justmeandopensource  4 years ago +1

      Hi Nishant, thanks for watching. For your use case, please create your own container based on the existing jenkins slave container and add the stuff you want.

  • @sarfarazshaikh
    @sarfarazshaikh 2 years ago

    Hi Sir,
    Can you please create a video on Apache Kafka on Kubernetes? I am facing an issue exposing the service over NodePort and LoadBalancer in minikube.

  • @cactusfamily3335
    @cactusfamily3335 5 years ago +1

    Hi, I really like your tutorial. I am wondering how I can run Jenkins in a GKE environment using Helm. Do I need to create a persistent volume and claim? Everything else should be the same?

    • @justmeandopensource
      @justmeandopensource  5 years ago +2

      Hi Cactus, thanks for watching this video. The steps are fairly similar. Using helm to deploy any application should be the same no matter where your k8s cluster is running. When it comes to dynamic volume/storage provisioning, as I am using bare metal, I have dynamic NFS provisioning set up. In the cloud, you can use a GCP persistent disk instead. Thanks.

    • @cactusfamily3335
      @cactusfamily3335 5 years ago +1

      @@justmeandopensource Thanks I will try it today

    • @justmeandopensource
      @justmeandopensource  5 years ago +1

      @@cactusfamily3335 you are welcome.

  • @stilianstoilov3728
    @stilianstoilov3728 4 years ago +1

    Hi Venkat, one Vim question: what is the key combination to delete the lines at 13:35?

    • @justmeandopensource
      @justmeandopensource  4 years ago +3

      Hi Stilian, thanks for watching. It's a combination of "marks" and "deletions":
      ma - to mark the beginning, and then d'a to delete from the mark to the current line.

    • @justmeandopensource
      @justmeandopensource  4 years ago +3

      The a in ma is just a marker name. You can use mb ... d'b and so on.
      ma ... y'a - to copy (yank) lines from the mark to the current line.

  • @subinaynag
    @subinaynag 3 years ago

    Well explained!! How can we improve the performance of slave pod creation? I want to spin up that pod a bit faster; in our setup it's taking 5 to 10 minutes for the slave to be up and running.
    I'm watching this video 2 years after its creation; hope to get your reply :)

    • @justmeandopensource
      @justmeandopensource  3 years ago +1

      Hi Subinay, thanks for watching. Think about what could affect the time it takes to spin up a pod. If the container image is not present on the machine, it will be downloaded from the container registry, so it will depend on the network speed. Then there is the size of the image: a smaller image is quicker to download and launch. Beyond that, there is not much more you can do to speed up the process.

    • @subinaynag
      @subinaynag 3 years ago

      @@justmeandopensource Thank you

    • @justmeandopensource
      @justmeandopensource  3 years ago

      @@subinaynag No worries. You are welcome.

  • @brunetjulien7492
    @brunetjulien7492 4 years ago +1

    Great work, it will help me.

  • @snake8801
    @snake8801 4 years ago +1

    Hi Venkat, how do you deploy to Kubernetes? Through Jenkins pipelines? Do you use Helm within Jenkins pipelines?

    • @justmeandopensource
      @justmeandopensource  4 years ago +1

      Hi Sam, thanks for watching. I did one video on CI/CD with Jenkins in Kubernetes. I used the Kubernetes CD Jenkins plugin to deploy resources to Kubernetes. Cheers.

  • @jeffrichaos
    @jeffrichaos 5 years ago +1

    Hi Venkat, can you create a tutorial on how to create a Jenkins pipeline using docker on the slave agents with this setup? I am trying to create a pipeline with docker commands, but docker is not found on the slave agents even after adding docker in the Global Tool Configuration.

    • @justmeandopensource
      @justmeandopensource  5 years ago +2

      Hi Jeffri, thanks for watching this video. I will have to play with Jenkins pipelines for a while and understand them before I can make a video. I will surely do one. As for docker not being found on the slave agents: the slave agents are themselves containers, and I am not sure we will be able to run docker inside them. Anyway, I will see how it works. Thanks.

  • @prakasha5870
    @prakasha5870 3 years ago

    Nice video. Where is your jenkins-values.yml file? I can't find the repo.

  • @cooljai565656
    @cooljai565656 4 years ago +2

    Hi, thanks for the awesome tutorials. I just want to ask one question. I tried to deploy Jenkins on an Azure Kubernetes cluster with one replica. After deployment, I scaled Jenkins to two replicas, but I got the error "Unable to mount volumes for pod "jenkins-: timeout expired waiting for volumes to attach or mount for pod".
    Will creating two replicas create two PVs or a single PV for both pods?
    And what's the difference between Jenkins master & slave versus running Jenkins with multiple replicas?

    • @justmeandopensource
      @justmeandopensource  4 years ago +2

      Hi Abhinav, thanks for watching.
      I guess you are very new to this Jenkins world as you are asking about the difference between the Jenkins master and a Jenkins slave. They serve different purposes. I would advise you to do some fundamental learning about Jenkins concepts before diving into this.
      Why do you need two replicas for Jenkins?

    • @cooljai565656
      @cooljai565656 4 years ago +1

      @@justmeandopensource Yes, I am new to this.
      I was just trying to scale the Jenkins server so it can distribute the load across two Jenkins pods.

    • @justmeandopensource
      @justmeandopensource  4 years ago +3

      @@cooljai565656 You won't be able to do it that way. If you want high availability then you have to go with the CloudBees enterprise version of Jenkins. If you just scale the replicas to 2 instances, it won't work. You would basically want to set up a single persistent volume that is used by both these pods. Even in this case, changes you make through one pod won't be visible if you go through the other pod. When files in the filesystem change, you need to restart/reload Jenkins. It simply doesn't work this way.

    • @cooljai565656
      @cooljai565656 4 years ago +1

      @@justmeandopensource Ohh, I got it. Thank you :)

    • @justmeandopensource
      @justmeandopensource  4 years ago +1

      @@cooljai565656 You are welcome.

  • @premkumarwaghmare4059
    @premkumarwaghmare4059 4 years ago +1

    How do I create a private repository for internal charts and use it to deploy via Helm?

    • @justmeandopensource
      @justmeandopensource  4 years ago +1

      Hi Prem, thanks for watching. Not sure if you have watched my other video about setting up a local helm chart repository.
      ruclips.net/video/hSk_r-CCvLE/видео.html

  • @vamseenath1
    @vamseenath1 4 years ago

    Hi Venkat, I tried all the same settings that you used in Jenkins, and the connection was established successfully.
    However, the build job succeeded without any slave pod being generated; it ran on the master pod itself. Any advice please?

  • @kartheekvasireddy5233
    @kartheekvasireddy5233 5 years ago +1

    Hi Venkat, I tried creating the exact setup but it's not working; the problem is that many of the Jenkins plugins failed to load. I am running Kubernetes v1.14.1. My Jenkins pod is starting and I am able to log in via the GUI, but I cannot find the Kubernetes plugin and a few more which are needed. Not sure if it is a problem with the chart; the version I am using is jenkins-1.1.5. Below are the logs from the Jenkins pod.
    java.io.IOException: Pipeline: Job version 2.31 failed to load.
    - workflow-support version 2.21 is missing. To fix, install version 2.21 or later.
    java.io.IOException: Pipeline: Stage View Plugin version 2.11 failed to load.
    - pipeline-rest-api version 2.11 is missing. To fix, install version 2.11 or later.
    - Pipeline: Job version 2.31 failed to load. Fix this plugin first.
    java.io.IOException: Plain Credentials Plugin version 1.5 failed to load.
    - credentials version 2.1.16 is missing. To fix, install version 2.1.16 or later.
    SEVERE: Failed Loading plugin SSH Credentials Plugin v1.16 (ssh-credentials)
    java.io.IOException: SSH Credentials Plugin version 1.16 failed to load.
    - credentials version 2.1.17 is missing. To fix, install version 2.1.17 or later.
    SEVERE: Failed Loading plugin Kubernetes plugin v1.14.0 (kubernetes)
    java.io.IOException: Kubernetes plugin version 1.14.0 failed to load.
    - durable-task version 1.16 is missing. To fix, install version 1.16 or later.
    - variant version 1.0 is missing. To fix, install version 1.0 or later.
    - apache-httpcomponents-client-4-api version 4.5.3-2.0 is missing. To fix, install version 4.5.3-2.0 or later.
    - jackson2-api version 2.7.3 is missing. To fix, install version 2.7.3 or later.
    - cloudbees-folder version 5.18 is missing. To fix, install version 5.18 or later.

    • @justmeandopensource
      @justmeandopensource  5 years ago +1

      Hi Kartheek, thanks for watching this video. Things might have changed slightly since I recorded this video. I will re-test this video today and will let you know if anything needs to be changed.
      Thanks,
      Venkat

  • @aneelkumar9786
    @aneelkumar9786 5 years ago +1

    Hi Venkat, when can I expect a code walkthrough of helm charts?

    • @justmeandopensource
      @justmeandopensource  5 years ago +1

      Hi Aneel, thanks for watching this video. I haven't really concentrated on helm charts since my couple of videos around Helm. I have loads of topics on my to-do list to cover and I will add this to the list. Thanks.

  • @ramuirla5681
    @ramuirla5681 4 years ago +1

    Hi Venkat,
    How do I join a docker container to a Jenkins slave?

    • @justmeandopensource
      @justmeandopensource  4 years ago +1

      Hi Ramu, thanks for watching. You can't join a docker container into a Jenkins slave. What you can do is install the Docker plugin and configure it so that jobs can be run on docker containers.
      devopscube.com/docker-containers-as-build-slaves-jenkins/

  • @moaijaz
    @moaijaz 5 years ago +1

    Hi Venkat, what is the way to make your master HA? Can we create more replicas? Also, since we are using a PV, if that pod gets killed will a new master attach to that PV?

    • @justmeandopensource
      @justmeandopensource  5 years ago +1

      Hi Mohsin, I believe you are talking about Jenkins master and not Kubernetes master.
      I don't think there is a HA solution for open source Jenkins. If you use CloudBees you get that High Availability.
      But you can implement some form of high availability architecture. I indeed tried creating more replicas and mounting the same storage pv on all of them with readwritemany option. But that didn't work very well. Because storage is tied to a particular instance. If you create a job through one master jenkins pod and you go to the other master pod, you won't see it. You will have to reload the configuration from disk.
      However, you can have a active/passive architecture something like the below. You can have the other pod as standby and if the primary fails, you can update the load balancer to point to the standby one.
      endocode.com/img/blog/jenkins-ha-setup_concept.png
      I think it involves some amount of planning.
      In my steps in this video, yes when the pod is killed, the new pod will attach to the pv.
      Hope this makes sense.
      Thanks.

    • @moaijaz
      @moaijaz 5 years ago +1

      @@justmeandopensource Thanks, yes, my question was about the Jenkins master pod. So if I delete that Jenkins master pod manually, I know it will create a new one, but will all the configuration be the same? Will all the jobs show up?

    • @justmeandopensource
      @justmeandopensource  5 years ago +2

      @@moaijaz Yes. I believe it will retain everything, and I think I have tested that. If not sure, just deploy another Jenkins and test it. Thanks.

  • @1sbollap
    @1sbollap 5 years ago +1

    What will happen to the Jenkins jobs you have created? Will they persist? I guess not, right? Is there a config to save those jobs?

    • @justmeandopensource
      @justmeandopensource  5 years ago

      Hi, the slave agent (pod) will get created and destroyed dynamically whenever a job is scheduled in Jenkins. For job persistence, please consider using persistent volumes. Thanks.

  • @elabeddhahbi3301
    @elabeddhahbi3301 3 years ago +1

    I have a problem with the nfs-client provisioner: every time I try to apply the deployment.yaml I get this message (error: wrong fs type, bad option, bad superblock on 192.68......)

    • @justmeandopensource
      @justmeandopensource  3 years ago +1

      Hi, thanks for watching. What process did you follow for deploying the nfs provisioner? Did you follow one of my videos? I need more details on your issue to be able to help. How is your setup different to the one I used in my videos.

    • @elabeddhahbi3301
      @elabeddhahbi3301 3 years ago

      @@justmeandopensource Events:
      Type Reason Age From Message
      ---- ------ ---- ---- -------
      Normal Scheduled 82s default-scheduler Successfully assigned default/nfs-client-provisioner-1617435131-6d5cd955cc-bdxzk to worker2
      Warning FailedMount 19s (x8 over 83s) kubelet MountVolume.SetUp failed for volume "nfs-client-root" : mount failed: exit status 32
      Mounting command: mount
      Mounting arguments: -t nfs 192.168.122.30:/srv/nfs/kubedata /var/lib/kubelet/pods/57932896-a049-483a-8803-be36726efa07/volumes/kubernetes.io~nfs/nfs-client-root
      Output: mount: wrong fs type, bad option, bad superblock on 192.168.122.30:/srv/nfs/kubedata,
      missing codepage or helper program, or other error
      (for several filesystems (e.g. nfs, cifs) you might
      need a /sbin/mount. helper program)
      In some cases useful info is found in syslog - try
      dmesg | tail or so.
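      For reference, this mount error usually means the worker node lacks the NFS client helper (the output even hints at a missing /sbin/mount helper). A hedged sketch of the usual fix, run on every worker node, with the IP and path taken from the output above:

      ```shell
      # "wrong fs type, bad superblock" on NFS typically means /sbin/mount.nfs
      # is missing on the node. Install the NFS client package:
      sudo apt install -y nfs-common     # Debian/Ubuntu
      # sudo yum install -y nfs-utils    # CentOS/RHEL
      # Then verify the export is reachable from the node:
      showmount -e 192.168.122.30        # should list /srv/nfs/kubedata
      sudo mount -t nfs 192.168.122.30:/srv/nfs/kubedata /mnt && sudo umount /mnt
      ```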

  • @watsav3160
    @watsav3160 4 года назад +1

    Hi Venkat
    Can we customize the folder name that gets created, "default-myjenkins-pvc*", to something like "myjenkins_homedirectory"?
    I don't want that long pvc-* name for the folder that gets created under NFS.
    Waiting for your reply!
    Thanks,
    SSV

    • @justmeandopensource
      @justmeandopensource  4 года назад +1

      Hi Sri, thanks for watching. I am not entirely sure if that's possible.

    • @watsav3160
      @watsav3160 4 года назад

      @@justmeandopensource Thanks for the response
      Can you suggest someone to look at this request?

  • @raghavendravenkat4274
    @raghavendravenkat4274 2 года назад

    Hey Venkat! You are a genius. Can I request you to DM me as I have a request for you?

  • @Siva-ur4md
    @Siva-ur4md 5 лет назад +1

    Hi Venkat, I am trying out helm upgrade. I created a webserver helm chart locally with image "username/repo:test1" that prints "Testing". Then I built a new image with the same name "username/repo:test1" that should print "Testing...", but when I run `helm upgrade releasename chartname` it just creates a different revision and does not create pods with the new version of the image.
    My imagePullPolicy is Always, and if I change the image name in the pod spec to something else like "username/repo:test2", it creates a new revision with the newly mentioned image. I have changed the version in Chart.yaml and it's still not working... Could you please help me if I am doing anything wrong here?

    • @justmeandopensource
      @justmeandopensource  5 лет назад +2

      Hi Siva, thanks for your question. I haven't explored creating helm chart myself although that is in my to do list. I am going to try this when I get some time and will share the details. Thanks.
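      For reference, a common cause here: if the rendered manifests do not change (same image tag), Kubernetes sees no diff and will not roll the pods, even though Helm records a new revision. A hedged sketch of two workarounds, assuming your chart exposes an image.tag value and wires pod annotations into the template:

      ```shell
      # Option 1: use a unique tag per build so the pod spec actually changes
      # and Kubernetes performs a rolling update.
      helm upgrade releasename chartname --set image.tag=test2   # assumes an image.tag value

      # Option 2: force a rollout by changing a pod-template annotation every
      # upgrade (only works if the chart renders podAnnotations into the spec).
      helm upgrade releasename chartname --set podAnnotations.rollme="$(date +%s)"
      ```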

  • @MrNiceseb
    @MrNiceseb 3 года назад

    Any video on Kafka in Kubernetes?

  • @praveensingh7327
    @praveensingh7327 5 лет назад +1

    Hey Venkat, the pod is getting created but the container is not starting, with the errors below. I suspect something to do with RBAC, although I've given tiller cluster-admin access as per your video. I'm running Jenkins as root. Thanks.
    # kubectl logs myjenkins-789946c4-6lv4h -c copy-default-config
    cp: cannot create regular file '/var/jenkins_home/config.xml': Permission denied
    cp: cannot create regular file '/var/jenkins_home/jenkins.CLI.xml': Permission denied
    cp: cannot create regular file '/var/jenkins_home/jenkins.model.JenkinsLocationConfiguration.xml': Permission denied

    • @justmeandopensource
      @justmeandopensource  5 лет назад

      Hi Praveen,
      Could you first try the steps exactly as shown in this video and confirm that it is working? If you are having problems following the exact steps in this video, we need to fix that first. If you got everything working as per this video, then you can try your customizations one at a time to see where it fails.
      Thanks,
      Venkat

  • @akshay_metgud
    @akshay_metgud 3 года назад

    Hi Venkat, hope you are doing well. When I describe the pod I see the error "Readiness probe failed: HTTP probe failed with status code: 500". Could you please help me with this? I am using the latest stable/jenkins.

  • @derk6831
    @derk6831 4 года назад +1

    Hi, when I ran Jenkins using Helm on AWS EKS for a simple CI/CD with the CloudBees Docker push and Kubernetes CD plugins, I got an error saying the docker daemon is not running inside the Jenkins container. Even when I don't use Helm, just a Jenkins container on k8s, the same thing happens... The build and push of the container image to Docker Hub works fine with Jenkins and Docker installed on a server...

    • @justmeandopensource
      @justmeandopensource  4 года назад +2

      Hi Kade, thanks for watching. Yes, you will see this error when you run docker commands like docker push/build/tag inside the jenkins slave container. In this setup, jenkins slaves are pods, which are already docker containers, and you won't be able to run a docker daemon inside the container. You will have to use the host machine's docker runtime as shown in this video. You only need the docker binary installed inside the container, and the container needs to mount /var/run/docker.sock from the host machine where it runs. Then all docker commands get executed by the docker runtime on the host. So I would advise you to tweak your slave container image to have the docker binary installed and properly mount the host's docker socket in the container. Cheers.
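      A minimal sketch of such an agent pod, assuming the image name and mount paths (the Kubernetes plugin's pod template takes the same volume settings):

      ```shell
      # Hedged sketch: give a Jenkins agent pod access to the host's docker runtime
      # by mounting /var/run/docker.sock (the agent image must contain the docker CLI).
      kubectl apply -f - <<EOF
      apiVersion: v1
      kind: Pod
      metadata:
        name: jenkins-agent-docker
      spec:
        containers:
        - name: jnlp
          image: jenkins/inbound-agent   # example; use a custom image with the docker binary
          volumeMounts:
          - name: docker-sock
            mountPath: /var/run/docker.sock
        volumes:
        - name: docker-sock
          hostPath:
            path: /var/run/docker.sock
            type: Socket
      EOF
      ```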

    • @derk6831
      @derk6831 4 года назад +1

      Thanks you very much for replying me.
      I will work on it. Any resources you can recommend? I already have a Linux Academy subscription.

    • @justmeandopensource
      @justmeandopensource  4 года назад +1

      @@derk6831 I believe you already got it working. Just saw your other comment. Cool.

  • @ilgiztimrukov
    @ilgiztimrukov 3 года назад

    Hello! I want to use it in the same way as "managed-nfs-server", but it crashes with the error "Startup probe failed: Get "10.244.2.181:8080/login": dial tcp 10.244.2.181:8080: connect: connection refused". There are enough resources and the folder was created on the NFS server. Any ideas?

  • @atulbarge5808
    @atulbarge5808 4 года назад +2

    Hi, good morning Venkat. May I request a video on installing Nexus repository, SonarQube, and application deployments on Kubernetes? I want my Nexus and SonarQube on Kubernetes and integrated with my Jenkins. Please!
    Thanks

    • @justmeandopensource
      @justmeandopensource  4 года назад +2

      Hi Atul, thanks for watching. I will add these to my list. But I already recorded videos for the next two months. Cheers.

  • @ZahidKhan-hi1gb
    @ZahidKhan-hi1gb 2 года назад

    Hi, can you please make a video on doing this deployment using an ansible-playbook?

  • @systemadministrator8192
    @systemadministrator8192 5 лет назад +1

    Hello Venkat, thank you for your work! Unfortunately I get this error:
    helm install stable/jenkins --values /tmp/jenkins.value --name myjenkins
    Error: render error in "jenkins/templates/deprecation.yaml": template: jenkins/templates/deprecation.yaml:277:10: executing "jenkins/templates/deprecation.yaml" at

    • @justmeandopensource
      @justmeandopensource  5 лет назад

      I will try this video today and let you know if anything has changed. If you don't have dynamic volume provisioning, you can disable persistent storage in the values.yaml file before installing it.
      Thanks,
      Venkat

    • @jitinkumar
      @jitinkumar 5 лет назад +1

      You have to replace rbac.install with rbac.create.

    • @justmeandopensource
      @justmeandopensource  5 лет назад

      @@jitinkumar Thanks for suggesting the solution. I haven't had a chance to look back at my video, but from the error he pasted, that seems to be the solution. Cheers.

  • @sanketj1112
    @sanketj1112 4 года назад +1

    I saw that when you type kubectl it auto-completes some commands. How do you do that?

    • @justmeandopensource
      @justmeandopensource  4 года назад +2

      Hi Sanket, thanks for watching.
      I use the Zsh shell with oh-my-zsh on top of it, and the zsh-autosuggestions plugin for auto-completion of commands based on my history. With oh-my-zsh there are lots of plugins you can use; I use its kubectl plugin for command completion. Cheers.

    • @sanketj1112
      @sanketj1112 4 года назад +1

      Thanks man .. will do that.

    • @justmeandopensource
      @justmeandopensource  4 года назад +1

      You are welcome.

  • @gauravagnihotri4912
    @gauravagnihotri4912 4 года назад +1

    Got an error "Jenkins-slave-23ctyhyhxx is offline" and the slave agent is showing offline.
    Any help? I can't find a suitable answer.

    • @justmeandopensource
      @justmeandopensource  4 года назад +1

      Hi Gaurav, thanks for watching. The cluster is actually launching a jenkins slave pod, but I believe it's terminating the pod immediately, before running the job on it. Have a look in the jenkins log file for more details. You might have a slight misconfiguration in the Kubernetes plugin where you specify the pod/container template. Double-check that configuration. Cheers.

  • @stilianstoilov3728
    @stilianstoilov3728 4 года назад

    Hi Venkat,
    It seems that the helm repo stable/jenkins cannot be added/installed anymore -> hub.helm.sh/charts/stable/jenkins
    I've found the following official bitnami Jenkins Helm chart, but the Kubernetes plugin is not automatically installed -> hub.helm.sh/charts/bitnami/jenkins
    After installing and configuring the K8s plugin, I constantly see the following in the Jenkins pod log: "o.c.j.p.k.KubernetesClientProvider$SaveableListenerImpl#onChange: Invalidating Kubernetes client: kubernetes null", which I think points to a problem with my config.
    Working on K8s cluster 1.17.8, Helm v3, and Jenkins 2.235.4.
    So can you find any workaround for the stable/jenkins repo, or explain the K8s plugin configuration a little?
    BR,
    Stilian Stoilov

  • @MsBrati
    @MsBrati 4 года назад +1

    Helm 3 removed Tiller. Would it be possible for you to update this one with Helm 3?

    • @justmeandopensource
      @justmeandopensource  4 года назад +2

      Hi Brati, the process is going to be the same, but the helm install command will be slightly different. That's it.
      If you are using Helm 3, use the below command to download the values file
      * helm show values stable/jenkins > /tmp/jenkins-values.yaml
      Update the values file as you like and use the below command to install the jenkins chart
      * kubectl create namespace jenkins
      * helm install jenkins stable/jenkins --values /tmp/jenkins-values.yaml --namespace jenkins

    • @JonnieAlpha
      @JonnieAlpha 4 года назад

      @@justmeandopensource Adding a repo first is required, with the command: helm repo add stable kubernetes-charts.storage.googleapis.com
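      Putting the Helm 3 steps from this thread together, a hedged sketch (the stable repo has since been deprecated, so the URL may no longer resolve):

      ```shell
      helm repo add stable https://kubernetes-charts.storage.googleapis.com
      helm repo update
      helm show values stable/jenkins > /tmp/jenkins-values.yaml
      kubectl create namespace jenkins
      helm install jenkins stable/jenkins --values /tmp/jenkins-values.yaml --namespace jenkins
      ```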

  • @Gobizen
    @Gobizen 5 лет назад +1

    Hey Venkat, is it possible to add external slaves to Jenkins in this infrastructure?

    • @justmeandopensource
      @justmeandopensource  5 лет назад +2

      Hi Gobi, thanks for watching this video. Yes, it's possible to connect external machines as slave agents to Jenkins running inside the k8s cluster. In addition to exposing the Jenkins dashboard as a service, you also need to expose the JNLP port 50000 as a service, either NodePort or LoadBalancer. Then from the slave you can run the slave-agent.jar downloaded from the master and connect to the LoadBalancer IP. Thanks.
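      A hedged sketch of exposing the JNLP port for external slaves, assuming the "myjenkins" release from this video (the service name and agent URL below are illustrative):

      ```shell
      # Expose JNLP port 50000 outside the cluster as a NodePort service.
      kubectl expose deployment myjenkins --name myjenkins-agent-ext \
        --type NodePort --port 50000 --target-port 50000
      kubectl get svc myjenkins-agent-ext   # note the allocated node port

      # On the external slave, download the agent jar from the master and connect:
      # java -jar agent.jar \
      #   -jnlpUrl http://<any-node-ip>:32323/computer/<agent-name>/slave-agent.jnlp \
      #   -secret <agent-secret>
      ```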

    • @Gobizen
      @Gobizen 5 лет назад +1

      @@justmeandopensource Thank you

    • @justmeandopensource
      @justmeandopensource  5 лет назад +1

      @@Gobizen you are welcome. Cheers.

  • @bhalchandramekewar6015
    @bhalchandramekewar6015 4 года назад +1

    Changes I made from the session:
    1) I created /var/nfs/kubedata instead of /srv/nfs/kubedata.
    2) I installed NFS on kmaster itself rather than the host machine (as I am using Windows with a VM).
    I have a few questions and need your assistance in getting them resolved:
    Where will this /persistentvolumes be located, and on which node?
    Do we need to create /persistentvolumes?
    How do I fix the following permission issue?
    Every time, I face permission issues when a PV is dynamically provisioned for a PVC.
    For example, it reports the following error for the myjenkins PVC:
    Normal Provisioning 13m (x6 over 21m) example.com/nfs_nfs-client-provisioner-df5db849c-jcfpb_7ca6f347-0b88-11ea-8b23-06210e6073cd External provisioner is provisioning volume for claim "default/myjenkins"
    Warning ProvisioningFailed 13m (x6 over 21m) example.com/nfs_nfs-client-provisioner-df5db849c-jcfpb_7ca6f347-0b88-11ea-8b23-06210e6073cd failed to provision volume with StorageClass "managed-nfs-storage": unable to create directory to provision new pv: mkdir /persistentvolumes/default-myjenkins-pvc-76bacf4c-dae1-4f50-99ae-c80a4b79048d: read-only file system
    Normal Provisioning 4m1s (x6 over 11m) example.com/nfs_nfs-client-provisioner-df5db849c-g5f24_c0ae6685-0c54-11ea-bacb-babf63973f3b External provisioner is provisioning volume for claim "default/myjenkins"
    Warning ProvisioningFailed 4m1s (x6 over 11m) example.com/nfs_nfs-client-provisioner-df5db849c-g5f24_c0ae6685-0c54-11ea-bacb-babf63973f3b failed to provision volume with StorageClass "managed-nfs-storage": unable to create directory to provision new pv: mkdir /persistentvolumes/default-myjenkins-pvc-76bacf4c-dae1-4f50-99ae-c80a4b79048d: read-only file system
    Normal ExternalProvisioning 80s (x83 over 21m) persistentvolume-controller waiting for a volume to be created, either by external provisioner "example.com/nfs" or manually created by system administrator

    • @justmeandopensource
      @justmeandopensource  4 года назад +1

      Hi Bhalchandra, in the dynamic nfs provisioning video, ruclips.net/video/AavnQzWDTEk/видео.html, if you look at the notes I showed, you will find that /persistentvolumes is created inside the nfs-client-provisioner pod. But the read-only filesystem error you are getting is something different. Try changing the permissions of /var/nfs/kubedata to 777, and make sure you are exporting the share with read-write privileges (use "rw" in the /etc/exports file). Thanks.
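      A hedged sketch of those two fixes on the NFS server (the path matches the commenter's setup; note the echo overwrites /etc/exports, so merge by hand if you have other exports):

      ```shell
      sudo chmod -R 777 /var/nfs/kubedata
      # Export read-write; the parentheses and "rw" matter.
      echo '/var/nfs/kubedata *(rw,sync,no_subtree_check,insecure)' | sudo tee /etc/exports
      sudo exportfs -rav
      sudo systemctl restart nfs-server
      ```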

    • @bhalchandramekewar6015
      @bhalchandramekewar6015 4 года назад +1

      ​@@justmeandopensource Yes, it's the same as suggested by you, but the problem still persists... not sure why... is it because of the /var directory?
      [vagrant@kmaster ~]$ ls -lrt /var/nfs/
      total 0
      drwxrwxrwx. 2 vagrant vagrant 6 Nov 22 05:43 kubedata
      [vagrant@kmaster ~]$ more /etc/exports
      /var/nfs/kubedata * rw,sync,no_subtree,check,insecure
      [vagrant@kmaster ~]$
      Also I found the following is set in the nfs-client-provisioner pod for the persistentvolumes directory:
      drwxrwxrwx 2 1000 1000 6 Nov 22 05:43 persistentvolumes
      When I tried changing the permissions or ownership I got the error "Read-only file system":
      / # chmod 777 -R persistentvolumes/
      chmod: persistentvolumes/: Read-only file system
      /# chown root:root persistentvolumes/
      chown: persistentvolumes/: Read-only file system

    • @justmeandopensource
      @justmeandopensource  4 года назад +1

      @@bhalchandramekewar6015 I think your content in the /etc/exports file is incorrect.
      You have
      /var/nfs/kubedata * rw,sync,no_subtree_check,insecure
      But it should be
      /var/nfs/kubedata *(rw,sync,no_subtree_check,insecure)
      I am wondering how the nfs service started with /etc/exports not in proper format.
      Thanks.

    • @bhalchandramekewar6015
      @bhalchandramekewar6015 4 года назад +1

      ​@@justmeandopensource Thanks for the update.
      Even after changing the export as you suggested... it's the same issue.
      $ more /etc/exports
      /var/nfs/kubedata *(rw,sync,no_subtree,check,insecure)
      I believe it's because the nfs-client-provisioner pod is getting created with a read-only filesystem by default, which prevents it from creating directories under /persistentvolumes/.
      Not sure if there is any additional configuration we can add while creating the nfs-client provisioner to make the filesystem writable.

    • @bhalchandramekewar6015
      @bhalchandramekewar6015 4 года назад +1

      Resolved... I reinstalled NFS and it started working.
      The export parameter written as no_subtree,check (instead of no_subtree_check) was the issue, due to which it was not working.

  • @JonnieAlpha
    @JonnieAlpha 4 года назад

    For those working with Helm 3 who can't see what Helm deployed during the installation of Jenkins, here is a workaround command:
    helm get manifest jenkins -n jenkins | kubectl get -f -

  • @PNGG100
    @PNGG100 5 лет назад +1

    @Just me and Opensource: please, I need to create a Jenkinsfile to automate the deployment of a Java application on Kubernetes. Any help please?

    • @justmeandopensource
      @justmeandopensource  5 лет назад +1

      Hi Jaballi, thanks for watching this video. Probably my other video might help you with the Jenkinsfile for pipeline in Kubernetes.
      ruclips.net/video/4E80gEen-o0/видео.html
      Thanks.

  • @shashankvashishtha2908
    @shashankvashishtha2908 4 года назад

    Hi Venkat
    I have a Jenkins master running as a pod and slaves running as physical servers (due to some dependencies). The slaves connect to the master through the tunneling option (by providing the IP address of the master). The issue happens when the master pod goes down and is rescheduled to some other node in the cluster, so the connection with the slaves fails. Every time, we have to change the tunneling IP manually. Can this problem be handled in some other way?

    • @justmeandopensource
      @justmeandopensource  4 года назад

      Hi Shashank, thanks for watching. In this video, my jenkins master is inside the k8s cluster deployed through Helm. The slave pods are also configured in the Kubernetes plugin configuration in Jenkins master and slaves also run inside the k8s cluster.
      There are two services, myjenkins and myjenkins-agent. The myjenkins service is configured as a NodePort service on port 32323, so I can access the jenkins dashboard from outside the cluster by connecting to any worker node on port 32323. The myjenkins-agent service is of type ClusterIP, listening on port 50000; this is the service that slave pods use to establish a connection with the Jenkins master. Since my slave pods are inside the k8s cluster, myjenkins-agent can just be a ClusterIP service. In the kubernetes plugin configuration, I configure slave pods to connect to "myjenkins-agent:50000". This is never going to change, as I am accessing the service by name.
      I am not sure how your physical slaves are connecting to the jenkins master. How are you exposing the myjenkins-agent service?

  • @redazaza2729
    @redazaza2729 4 года назад +1

    Maybe it's slow because of the NFS persistent volume, not CPU or memory.

  • @MsBrati
    @MsBrati 4 года назад +1

    My pod is in Init state, using a PVC. The log shows the container was created successfully, but it remains in Init state. Any idea? Anywhere I can share my jenkins values file?

    • @justmeandopensource
      @justmeandopensource  4 года назад +1

      Is your dynamic volume provisioning setup working fine? You can share your jenkins.values file in pastebin.com and share the link. I can try it when I have some time. Cheers..

    • @MsBrati
      @MsBrati 4 года назад +1

      @@justmeandopensource I've tried for the last 4 days, in different ways; the container is still in Init state. The PVC was created successfully with an EBS volume. Not able to trace it.

    • @MsBrati
      @MsBrati 4 года назад +1

      @@justmeandopensource here is the pvc yaml:
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: jenkins-pvc
        namespace: default
      spec:
        accessModes:
          - ReadWriteOnce
        volumeMode: Filesystem
        resources:
          requests:
            storage: 200Gi
        storageClassName: standard
      here is the storage class:
      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
        name: standard
      provisioner: kubernetes.io/aws-ebs
      parameters:
        type: gp2
        fsType: ext4
      reclaimPolicy: Retain
      allowVolumeExpansion: true
      mountOptions:
        - debug
      volumeBindingMode: Immediate
      --
      here is the pv:
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: jenkins-pv
      spec:
        capacity:
          storage: 200Gi
        volumeMode: Filesystem
        accessModes:
          - ReadWriteOnce
          - ReadOnlyMany
        persistentVolumeReclaimPolicy: Retain
        storageClassName: standard
        hostPath:
          path: /data/shared/jenkins
      ----
      Here is the command I am using:
      helm install --set persistence.existingClaim=jenkins-pvc --set master.serviceType=NodePort stable/jenkins --generate-name
      error: pod in Init state.
      Any help?

    • @justmeandopensource
      @justmeandopensource  4 года назад +1

      @@MsBrati I don't see any problem with any of your manifests or the helm install command. It will be hard to troubleshoot your issue remotely.
      What does kubectl describe deploy tell you?

    • @MsBrati
      @MsBrati 4 года назад +1

      @@justmeandopensource Name: myjenkins
      Namespace: default
      CreationTimestamp: Wed, 08 Jan 2020 11:49:34 +0000
      Labels: app.kubernetes.io/component=jenkins-master
      app.kubernetes.io/instance=myjenkins
      app.kubernetes.io/managed-by=Tiller
      app.kubernetes.io/name=jenkins
      helm.sh/chart=jenkins-1.9.13
      Annotations: deployment.kubernetes.io/revision: 1
      Selector: app.kubernetes.io/component=jenkins-master,app.kubernetes.io/instance=myjenkins
      Replicas: 1 desired | 1 updated | 1 total | 0 available | 1 unavailable
      StrategyType: Recreate
      MinReadySeconds: 0
      Pod Template:
      Labels: app.kubernetes.io/component=jenkins-master
      app.kubernetes.io/instance=myjenkins
      app.kubernetes.io/managed-by=Tiller
      app.kubernetes.io/name=jenkins
      helm.sh/chart=jenkins-1.9.13
      Annotations: checksum/config: a14b7eab49dd6072395f8f79398511929d24bb93ac02a19d06f93d42ed6d7a7f
      Service Account: myjenkins
      Init Containers:
      copy-default-config:
      Image: jenkins/jenkins:lts
      Port:
      Host Port:
      Command:
      sh
      /var/jenkins_config/apply_config.sh
      Limits:
      cpu: 2
      memory: 4Gi
      Requests:
      cpu: 50m
      memory: 256Mi
      Environment:
      ADMIN_PASSWORD: Optional: false
      ADMIN_USER: Optional: false
      Mounts:
      /tmp from tmp (rw)
      /usr/share/jenkins/ref/plugins from plugins (rw)
      /usr/share/jenkins/ref/secrets/ from secrets-dir (rw)
      /var/jenkins_config from jenkins-config (rw)
      /var/jenkins_home from jenkins-home (rw)
      /var/jenkins_plugins from plugin-dir (rw)
      Containers:
      jenkins:
      Image: jenkins/jenkins:lts
      Ports: 8080/TCP, 50000/TCP
      Host Ports: 8080/TCP, 50000/TCP
      Args:
      --argumentsRealm.passwd.$(ADMIN_USER)=$(ADMIN_PASSWORD)
      --argumentsRealm.roles.$(ADMIN_USER)=admin
      --httpPort=8080
      Limits:
      cpu: 2
      memory: 4Gi
      Requests:
      cpu: 50m
      memory: 256Mi
      Liveness: http-get :http/login delay=90s timeout=5s period=10s #success=1 #failure=5
      Readiness: http-get :http/login delay=60s timeout=5s period=10s #success=1 #failure=3
      Environment:
      POD_NAME: (v1:metadata.name)
      JAVA_OPTS:
      JENKINS_OPTS:
      JENKINS_SLAVE_AGENT_PORT: 50000
      ADMIN_PASSWORD: Optional: false
      ADMIN_USER: Optional: false
      Mounts:
      /tmp from tmp (rw)
      /usr/share/jenkins/ref/plugins/ from plugin-dir (rw)
      /usr/share/jenkins/ref/secrets/ from secrets-dir (rw)
      /var/jenkins_config from jenkins-config (ro)
      /var/jenkins_home from jenkins-home (rw)
      Volumes:
      plugins:
      Type: EmptyDir (a temporary directory that shares a pod's lifetime)
      Medium:
      SizeLimit:
      tmp:
      Type: EmptyDir (a temporary directory that shares a pod's lifetime)
      Medium:
      SizeLimit:
      jenkins-config:
      Type: ConfigMap (a volume populated by a ConfigMap)
      Name: myjenkins
      Optional: false
      secrets-dir:
      Type: EmptyDir (a temporary directory that shares a pod's lifetime)
      Medium:
      SizeLimit:
      plugin-dir:
      Type: EmptyDir (a temporary directory that shares a pod's lifetime)
      Medium:
      SizeLimit:
      jenkins-home:
      Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
      ClaimName: jenkins-pvc
      ReadOnly: false
      Conditions:
      Type Status Reason
      ---- ------ ------
      Available False MinimumReplicasUnavailable
      Progressing True ReplicaSetUpdated
      OldReplicaSets: myjenkins-6b9667ddf8 (1/1 replicas created)
      NewReplicaSet:
      Events:
      Type Reason Age From Message
      ---- ------ ---- ---- -------
      Normal ScalingReplicaSet 4m46s deployment-controller Scaled up replica set myjenkins-6b9667ddf8 to 1

  • @singhsummer
    @singhsummer 4 года назад

    Hi, is it possible to have a small demo on the Jenkins operator?

    • @justmeandopensource
      @justmeandopensource  4 года назад

      Hi Sumer, thanks for watching this video. I have already done some work on the jenkins operator and a video is in the making. You will see it soon. Cheers.

  • @jameeappiskey5830
    @jameeappiskey5830 3 года назад +1

    Please make a video on kubernetes plugin and DinD

    • @justmeandopensource
      @justmeandopensource  3 года назад +2

      Hi Jamee, thanks for watching.
      I did a video long time back on kubernetes plugin management using krew.
      ruclips.net/video/-HMbSqEQPpk/видео.html
      Not sure if thats what you wanted.
      Also for DinD docker in docker, i did KinD.
      ruclips.net/video/4p4DqdTDqkk/видео.html

    • @jameeappiskey5830
      @jameeappiskey5830 3 года назад +1

      @@justmeandopensource Thank you! :)

    • @justmeandopensource
      @justmeandopensource  3 года назад +1

      @@jameeappiskey5830 You are welcome

  • @kumarvedavyas879
    @kumarvedavyas879 4 года назад +1

    Hey...
    I think you live in London. Are you safe???

    • @justmeandopensource
      @justmeandopensource  4 года назад +3

      Hi Kumar, thanks for checking. Yes I am safe and staying indoor. Hope you are keeping safe as well.

    • @kumarvedavyas879
      @kumarvedavyas879 4 года назад +1

      @@justmeandopensource Yeah... You have to teach us a lot 😂😂

    • @justmeandopensource
      @justmeandopensource  4 года назад +1

      @@kumarvedavyas879 Sure. No worries.

  • @gabriell.berlotperalta1452
    @gabriell.berlotperalta1452 4 года назад

    I tried several times and it's not working; the jenkins pod status is "Init:0/1" and I can see 2 restarts.
    Running kubectl logs jenkins-7dd657bdb7-9vj66 -n jenkins outputs: Error from server (BadRequest): container "jenkins" in pod "jenkins-7dd657bdb7-9vj66" is waiting to start: PodInitializing.
    Any ideas?

    • @justmeandopensource
      @justmeandopensource  4 года назад

      Hi Gabriel, thanks for watching.
      I just tried this video and it worked perfectly fine.
      dpaste.com/18BV3VE
      You won't be able to view the logs of the jenkins container as it hasn't been created yet, so you need to find out why the pod is still in the initializing state. It might be waiting for persistent volumes to be provisioned. One quick test you can do is to disable persistent storage in the jenkins values file: set persistence to false everywhere in that values file and do a helm install. If it works, then you have a problem with your storage provisioner. Cheers.
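      A hedged sketch of that quick test (Helm v2 syntax; persistence.enabled is the relevant flag in the stable/jenkins chart):

      ```shell
      helm delete --purge myjenkins   # clean up the stuck release first
      helm install stable/jenkins --name myjenkins --set persistence.enabled=false
      kubectl get pods -w             # pod should reach Running without waiting on a PVC
      ```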

    • @gabriell.berlotperalta1452
      @gabriell.berlotperalta1452 4 года назад

      @@justmeandopensource I was able to find that the problem is related to the plugins. If I specify an empty list of plugins it works; otherwise it tries to download the plugins forever and the following error is displayed in the jenkins pod logs: "Failure (28) Retrying in 1 seconds...". Basically, my host machine is Windows 10, and from there I created 3 VMs using your vagrant scripts. If I run the commands that download the plugins directly on the k8s node it works fine, but there seem to be issues when running within the pod (it appears to be a DNS issue). I don't know if there is a value like clusterZone: "cluster.local" that could solve this problem, or maybe I have to add a proxy value in initContainerEnv. Any advice will be welcome. Thanks!!!!

    • @justmeandopensource
      @justmeandopensource  4 года назад

      @@gabriell.berlotperalta1452 Okay. So I guess its trying to download plugins but couldn't reach the external sites. Sounds like a pod networking issue.

    • @AhmedSalem-ib3qp
      @AhmedSalem-ib3qp 4 года назад +1

      @@justmeandopensource I faced the same issue; it turned out I was connected to a VPN by mistake. I deleted all the resources related to jenkins and re-installed using helm, and it worked perfectly. So yes, I confirm these kinds of errors are related to networking issues.
      Cheers from Egypt. I am a big fan of your videos and got addicted to your learning methodology; please stay on track as you really have a lot of fans.
      Ahmed Salem.

    • @justmeandopensource
      @justmeandopensource  4 года назад +1

      @@AhmedSalem-ib3qp That's great to hear. Many thanks for your interest in my videos. I will surely get this going. Cheers.

  • @Pallikkoodam94
    @Pallikkoodam94 5 лет назад +1

    Thank you for your video. I am getting the following error while executing $ helm init --service-account tiller:
    Error: unknown flag: --service-account

    • @justmeandopensource
      @justmeandopensource  5 лет назад +2

      Hi Ajeesh, thanks for watching this video.
      Please watch my getting started with helm video in the below link.
      ruclips.net/video/HTj3MMZE6zg/видео.html
      I believe you have downloaded latest version of Helm, possibly version 3. With v3, helm dropped tiller component. Please use the version I used in the above video link and you should be good.
      Thanks
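      For reference, the Helm v2 Tiller bootstrap used in that video looks like this (not needed on Helm v3, which dropped Tiller):

      ```shell
      kubectl -n kube-system create serviceaccount tiller
      kubectl create clusterrolebinding tiller --clusterrole cluster-admin \
        --serviceaccount kube-system:tiller
      helm init --service-account tiller
      ```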

    • @Pallikkoodam94
      @Pallikkoodam94 5 лет назад +1

      @@justmeandopensource Yes, I watched it; see my outputs:
      $ kubectl -n kube-system create serviceaccount tiller
      Error from server (AlreadyExists): serviceaccounts "tiller" already exists
      $ kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
      Error from server (AlreadyExists): clusterrolebindings.rbac.authorization.k8s.io "tiller" already exists
      and the next command i am getting this error.
      $ helm init --service-account tiller
      Error: unknown flag: --service-account

    • @Pallikkoodam94
      @Pallikkoodam94 5 лет назад +1

      Oh okay, I got it
      $ helm init --service-account tiller --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | sed 's@ replicas: 1@ replicas: 1
      selector: {"matchLabels": {"app": "helm", "name": "tiller"}}@' | kubectl apply -f -
      deployment.apps/tiller-deploy created
      service/tiller-deploy created
      I think this is an issue with kubernetes version and helm

    • @justmeandopensource
      @justmeandopensource  5 лет назад +1

      @@Pallikkoodam94 That's it. I forgot to ask you what k8s version you were using. With 1.16.0 as you found apiversions have changed. Glad that you figured it out. Cheers.

  • @atulbarge5808
    @atulbarge5808 4 года назад +1

    Hi Venkat

  • @atulbarge5808
    @atulbarge5808 4 года назад

    It is proving very difficult to run my Jenkins on port 443 with an SSL certificate. Can you please help me with that? I have Jenkins running on port 80 successfully, but how do I run it on 443? Please let me know. Thanks.

    • @justmeandopensource
      @justmeandopensource  4 года назад

      Hi Atul, how about configuring an nginx reverse proxy with a self-signed certificate?
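      A minimal sketch of that idea, assuming the NodePort 32323 from this video; the hostname, cert paths, and node IP are placeholders:

      ```shell
      # Self-signed cert + nginx TLS termination in front of the Jenkins NodePort.
      sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
        -keyout /etc/ssl/private/jenkins.key -out /etc/ssl/certs/jenkins.crt \
        -subj "/CN=jenkins.example.com"
      cat <<'EOF' | sudo tee /etc/nginx/conf.d/jenkins.conf
      server {
          listen 443 ssl;
          server_name jenkins.example.com;
          ssl_certificate     /etc/ssl/certs/jenkins.crt;
          ssl_certificate_key /etc/ssl/private/jenkins.key;
          location / {
              proxy_pass http://WORKER_NODE_IP:32323;   # replace with a worker node IP
              proxy_set_header Host $host;
              proxy_set_header X-Forwarded-Proto https;
          }
      }
      EOF
      sudo nginx -t && sudo nginx -s reload
      ```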

    • @atulbarge5808
      @atulbarge5808 4 года назад

      @@justmeandopensource thanks I am also thinking in same way.

    • @justmeandopensource
      @justmeandopensource  4 года назад

      @@atulbarge5808 Cool.

    • @atulbarge5808
      @atulbarge5808 4 года назад

      @@justmeandopensource Hi Venkat, I don't know how to set up a reverse proxy for my Jenkins on nginx, because my Jenkins is on a kubernetes cluster. How do I make it a reverse proxy, and where do I need to change the nginx configuration file? Please give me ideas.

    • @justmeandopensource
      @justmeandopensource  4 года назад

      @@atulbarge5808 Okay. Let me try it in my environment, and if possible I will make a video of it. I believe it will use Let's Encrypt for certificates and the nginx ingress.