[ Kube 26 ] Prometheus monitoring for Kubernetes Cluster and Grafana visualization

  • Published: 19 Dec 2024

Comments • 329

  • @akk2766 5 years ago +5

    An absolute must-watch for anyone attempting to set up monitoring on their k8s cluster. I'd been hunting this information down for several weeks now, and although there are numerous sites on the web talking about the topic and showing off some cool-looking screenshots, none comes close to the perfect job you've done here. Keep this good work going, mate.
    Awesome stuff.

    • @justmeandopensource 5 years ago

      Thanks for watching this video and taking time to comment. You made my day. Cheers.

    • @waterkingdom9839 5 years ago +1

      Hello Venkat, can you also add a video using the persistent volume approach and not just dynamic provisioning? It will be easier for viewers not using the NFS-based approach.

    • @justmeandopensource 5 years ago

      @@waterkingdom9839 Hi, since I am using bare metal, I can't use any other storage provisioners; NFS is the easiest one, and I wanted to show it the proper way for persistence. May I know what you mean by the persistent volume approach? If I don't use dynamic provisioning, I can make use of hostPath volumes, but they will be on individual worker nodes, so we'd have to define a nodeSelector to schedule the pod to the same host, which adds more complexity. So I went with the standard approach.
      Thanks.

  • @abrahamolamipo6449 5 years ago +11

    Hi Venkat, just want to say a big thanks to you for your videos... you've been of great help to me and many out there. Keep up the good work.

    • @justmeandopensource 5 years ago +2

      Hi Abraham, thanks for watching this video and taking the time to comment. Much appreciated. Cheers

  • @techelevatesolutions 3 years ago +2

    Great instruction and impressively efficient use of kubectl commands.
    It's worth mentioning for viewers that the steps shown may not work anymore as demonstrated. Using Tiller with Helm is now a deprecated approach for installing Prometheus, as is the command "helm init". The "stable" repo is also reportedly being phased out of the Helm community's projects.

    • @justmeandopensource 3 years ago +2

      Hi Alonso, thanks for watching and explaining the current state of play of this process. The problem is that any video I do in this space gets outdated quickly and I have to do follow-up videos. This is on my list and I will try to get it in. Cheers.

  • @rahulshekharpandey 4 years ago +4

    Very awesome video, hats off to you for explaining everything step by step. Can't wait to watch your next video. Thank you.

    • @justmeandopensource 4 years ago +2

      Hi Rahul, thanks for watching. Hope you are aware of this whole playlist where I have over 100 videos related to Kubernetes.

  • @rishiabhishektanuku 4 years ago +2

    Awesome, bro... I was trying to learn the monitoring tools for Kubernetes. I have seen a lot of tutorials, but they all use predefined configurations, which I did not understand. Your explanation is really great from beginning to end. Thank you.

    • @justmeandopensource 4 years ago +2

      Hi Abhishek, thanks for watching. Glad that you found it useful. Cheers.

  • @zongzaili9701 2 years ago +1

    Your video is very helpful for beginners, especially the second half on designing dashboards. Thanks.

  • @sunilkumarmatangi 3 years ago +1

    Hi Venkat,
    You are doing a wonderful job; it was a great help to learn Kubernetes in an easy way. Keep rocking, all the best. I have shared many of your videos with my friends, and they are super happy.

    • @justmeandopensource 3 years ago +1

      Wow! That is much appreciated. Thanks for your kind act. Cheers.

  • @walidshouman 4 years ago +1

    Thanks for the great tutorials,
    Following are some implementation notes:
    - The repo/yamls/nfs-provisioner/deployment.yaml: both ```spec.template.spec.containers[0].env[2].value``` and ```spec.template.spec.volumes[0].nfs.path``` shall be changed to the mounted NFS directory
    - Dynamic provisioning can be enabled by logging into the master and editing ```/etc/kubernetes/manifests/kube-apiserver.yaml``` to ensure ```DefaultStorageClass``` is in the enable-admission-plugins list in ```spec.containers[0].command```, i.e. ```--enable-admission-plugins=NodeRestriction,DefaultStorageClass```
    - There are 5 prometheus services; ```service/pm-prometheus-server``` is the one we want to set a NodePort for
    - The NFS mount will need the ```no_root_squash``` option set for Grafana to work, even though this is bad practice
    - The NFS mount will need ```insecure``` if the nodes use a NAT adapter; follow this [article](blog.bigon.be/2013/02/08/mount-nfs-export-for-machine-behind-a-nat/)
    - The NFS options I've tried with are ```rw,sync,no_subtree_check,insecure,no_root_squash```
    Thanks again ^_^
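    To double-check that admission plugin on a kubeadm master, a quick sketch (on recent versions DefaultStorageClass is usually enabled already):
    $ grep enable-admission-plugins /etc/kubernetes/manifests/kube-apiserver.yaml
    If it is missing, add it to the flag, e.g. --enable-admission-plugins=NodeRestriction,DefaultStorageClass; the kubelet restarts the static pod automatically once the file is saved.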

    • @justmeandopensource 4 years ago +1

      Hi Walid, many thanks for watching and taking time to share your comments. Cheers.

  • @machireddyshyamsunder987 4 years ago +2

    Excellent Venkat, I love your training videos. I am learning a lot.

  • @devmrtcbk 3 years ago +2

    You are amazing. I think I will write a thank you to all of your videos :)

  • @ankitrawat721 5 years ago +1

    Hello Venkat, I have been watching and following your videos, all videos are very nicely presented and explained.

  • @richardmetzler7119 5 years ago +1

    At 25:40, I think it is better to use the DNS name of your Prometheus service (my-svc.my-namespace.svc, so prometheus.prometheus.svc if I'm not mistaken) and use server access.

    • @justmeandopensource 5 years ago +1

      Hi Richard, thanks for watching. Yes, that's how I would actually do it. But I only realized it after recording the video.
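      For reference, the in-cluster datasource URL would look like the following, assuming the chart's default service name prometheus-server in the prometheus namespace:
      http://prometheus-server.prometheus.svc.cluster.local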

  • @deanwoods6295 5 years ago +1

    Thanks for putting this video out. It's very clear and easy to follow.

  • @guymasumbuko6119 3 years ago +2

    Once again and as usual, a great video from Venkat!

  • @SanjeevKumar-nq8td 2 years ago

    Any plans for an update session on this?

  • @Yesdin007 1 year ago

    For people who are stuck creating the NFS share: you have to install nfs-kernel-server on the target machine you want to use for NFS. After that, create the path /srv/nfs/kubedata and edit the file /etc/exports, adding the line "/srv/nfs/kubedata *(rw,sync,no_subtree_check)". This allows all machines to connect to /srv/nfs/kubedata; if you want to authorise only certain hosts, replace * with the IP address in /etc/exports. Once you've done that, reload the service with sudo systemctl reload nfs-kernel-server, and your pod will then be in running mode.
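    Collected as a runnable sketch, assuming a Debian/Ubuntu NFS host as in the comment above:
    $ sudo apt install -y nfs-kernel-server
    $ sudo mkdir -p /srv/nfs/kubedata
    $ echo '/srv/nfs/kubedata *(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
    $ sudo systemctl reload nfs-kernel-server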

  • @nagendrareddybandi1710 4 years ago +1

    Hi Sir,
    Thanks for this video on this stuff.
    It's very nice and excellent.

  • @rupeshpaneerselvam2958 5 years ago +2

    Thanks much... I have deployed it on AWS and it's working!

    • @justmeandopensource 5 years ago +1

      Hi Rupesh, thanks for watching this video and confirming that it is working in the AWS environment. Good to hear that, as I haven't tried it there. Thanks.

  • @lavanyaanbu1234 5 years ago +1

    Very useful video to begin with monitoring..

    • @justmeandopensource 5 years ago

      Hi Lavanya, thanks for following this Kubernetes series. Hope you found it useful. Thanks

  • @sandeepmishra2 4 years ago +1

    Thank you so much for sharing.
    Nicely explained.

  • @vedicbhakt 4 years ago +1

    Thanks Venkat. Please let me know if you get the chance to look at k8s-Zabbix integration. Thx.

  • @lavanyaanbu1234 5 years ago +2

    It would be good if you could post a video on basic troubleshooting in K8s to start with.

    • @justmeandopensource 5 years ago

      Viewers usually comment if they have any issues and I have been helping them. And as you suggested I think it would make sense to post troubleshooting videos too. Will keep this in mind. Thanks

  • @huidey3159 4 years ago +1

    Awesome video, straightforward and very useful, like always. Thanks for sharing, Venkat.
    Question: after enabling the Grafana persistence, the pod fails to start, showing `Init:CrashLoopBackOff`; the log shows ```Error from server (BadRequest): container "grafana" in pod "grafana-9f7c7f7ff-8vz9n" is waiting to start: PodInitializing```
    Do you have any suggestions about it? Changing to `service.persistence.enabled=false` it works fine, but with no persistence... thanks in advance.

    • @justmeandopensource 4 years ago +1

      Hi Huide, thanks for watching. So without persistence it is working fine? You've got to sort out your persistent volume provisioning first if you need persistence.
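      A quick debugging sketch for that state (the init container name is an assumption; the grafana chart typically calls it init-chown-data):
      $ kubectl -n grafana describe pod <grafana-pod>
      $ kubectl -n grafana logs <grafana-pod> -c init-chown-data
      Permission errors there usually point back at the NFS export options (e.g. no_root_squash).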

  • @zaheerhussain5311 4 years ago +2

    Hi,
    Any video on the Prometheus operator with dynamic persistent volumes using NFS?
    Regards,
    Zaheer

  • @kishoremummaleti1791 3 years ago +1

    Is it possible to manually install Metricbeat into the cluster for monitoring in Kibana?

    • @justmeandopensource 3 years ago +1

      Not sure what you mean. Can you explain with a bit more context?

  • @ramakanthsri183 3 years ago +3

    High five, boss :) Great video

  • @magrag1987 3 years ago

    @Just me and Opensource It was a wonderful video, thank you. Can you make a video on getting metrics from a database which is not in the cluster? How the exporter will work and such stuff. Thank you.

  • @swarajgupta3087 4 years ago +1

    Hello Venkat, I want to set up Prometheus and Grafana on machines which don't have internet connectivity. I can't use Helm as these are bare-metal machines, but a Kubernetes cluster is available on them. How could I set up Prometheus/Grafana in that case? Thanks for everything!

    • @justmeandopensource 4 years ago +1

      Hi Swaraj, thanks for watching.
      > I can not use Helm as these are bare metal machines
      Helm can be used on any machine.
      > Have kubernetes cluster on them
      How are you running Kubernetes clusters on machines that don't have internet access? They need to download the Docker images, right?

  • @chytrak4060 4 years ago +2

    very good explanation

  • @Siva-ur4md 5 years ago +2

    Hello Venkat, thanks for the video. May I request that you make a video on Prometheus queries (functions like rate, increase, sum)? It would help us understand Prometheus and Grafana better. Thanks,

    • @justmeandopensource 5 years ago +1

      Hi Siva, I would love to do those topics. Let me see if I can. At the moment I'm focusing on the Kubernetes and AWS series. Cheers.
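      In the meantime, a tiny hedged sketch of those functions against the Prometheus HTTP API (the metric names assume node_exporter is being scraped, and 32322 is the NodePort used in this video):
      $ curl -G http://<node-ip>:32322/api/v1/query --data-urlencode 'query=sum(rate(node_cpu_seconds_total{mode!="idle"}[5m]))'
      $ curl -G http://<node-ip>:32322/api/v1/query --data-urlencode 'query=increase(node_network_receive_bytes_total[1h])'
      rate() gives the per-second increase of a counter over the window, increase() gives the total growth, and sum() aggregates across series.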

  • @rajeshbastia8502 5 years ago +1

    Hi Venkat, I'm struggling to install it with Helm v3 and the newly updated Prometheus file. Please provide the command to install it in the namespace, and also let me know exactly where to modify the updated prometheus file.

    • @justmeandopensource 5 years ago +1

      Hi Rajesh, thanks for watching.
      I know things have changed slightly since I recorded this video.
      Let me give you the commands for Helm 3.
      Prometheus:
      First check out the values file. I used helm inspect for Helm 2; with Helm 3 you need to use helm show.
      $ helm show values stable/prometheus > /tmp/prometheus.values
      Update the values file as per your need. You can disable persistence or change the size of the persistent volume.
      Finally, install it. In Helm 2 I used --name to specify the release name; with Helm 3, --name is deprecated. And more importantly, the namespace has to be created first (kubectl create namespace prometheus); Helm 3 doesn't create a namespace for you.
      $ helm install prometheus stable/prometheus --namespace prometheus --values /tmp/prometheus.values
      Do this for Grafana as well.
      Cheers.

    • @rajeshbastia8502 5 years ago +1

      @@justmeandopensource Thanks for the reply and the commands, Venkat. Another thing I wanted to know: the Prometheus YAML file has changed with the new update. Please let me know which section of the file is to be modified.

    • @justmeandopensource 5 years ago +1

      @@rajeshbastia8502 I don't think a lot has changed. In this video, I only updated the service type to NodePort and set the nodePort value to 32322. Persistence was already enabled. Just modify the service type from ClusterIP to NodePort (line number 273), uncomment line number 272 and set the nodePort. That's it. Cheers.
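      If you'd rather not hunt for those line numbers, the same change can be made with --set flags; a sketch assuming the stable/prometheus chart keys (verify them with helm show values):
      $ helm install prometheus stable/prometheus --namespace prometheus --set server.service.type=NodePort --set server.service.nodePort=32322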

  • @Tshadowburn 5 years ago +1

    Hi Venkat :) I'm sorry to bother you again. I'm trying to set up Prometheus too, but since I'm using Helm 3 I don't need Tiller now. The thing is, my prometheus-1575459249-server and prometheus-1575459249-alertmanager pods don't start; when I describe them I see: pod has unbound immediate PersistentVolumeClaims. Thank you if you have any info on that.

    • @justmeandopensource 5 years ago +2

      Hi, don't be sorry, please ask questions. It helps me as well. So in your case the pods are pending because they can't get persistent volumes. Did you install dynamic NFS provisioning in your cluster? I have done a video on that.
      ruclips.net/video/AavnQzWDTEk/видео.html
      Once you have installed dynamic NFS provisioning, your persistent volume claims will be able to get persistent volumes automatically from NFS. In the prometheus and grafana values files, you will have to uncomment and specify the storage class name under the persistence section. If you don't have or don't want to enable dynamic volume provisioning, you can disable persistence in the prometheus and grafana values files by setting the option to false.
      Thanks.
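      For the last option, a sketch of disabling persistence from the CLI instead of editing the values files; the keys are assumptions based on the stable charts (prometheus uses per-component persistentVolume blocks, grafana a top-level persistence block):
      $ helm install prometheus stable/prometheus --namespace prometheus --set server.persistentVolume.enabled=false --set alertmanager.persistentVolume.enabled=false
      $ helm install grafana stable/grafana --namespace grafana --set persistence.enabled=false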

    • @Tshadowburn 5 years ago +1

      @@justmeandopensource Thank you, I will try that and let you know if I manage to do it.

    • @justmeandopensource 5 years ago +1

      @@Tshadowburn Cool.

  • @visheshkumarsingh9818 4 years ago +2

    Can you make a tutorial on monitoring the other services which are running on our Kubernetes cluster?
    For example, MongoDB, MySQL, etc., and then monitoring them with our Prometheus-operator.

    • @justmeandopensource 4 years ago +1

      Hi Vishesh, thanks for watching. External services running in the Kubernetes cluster? Are they running outside or within the Kubernetes cluster?

    • @visheshkumarsingh9818 4 years ago +1

      @@justmeandopensource Thanks for the response. In recent days I researched a lot about how we can monitor services that are not included by default (like node-exporter, etc.) in the Prometheus-operator, such as database services (or Blackbox). The approach I found was to create a ServiceMonitor for that particular thing, but it didn't work out for me; I am missing something. Also, it's within the cluster, in a different namespace.

    • @justmeandopensource 4 years ago +1

      @@visheshkumarsingh9818 I see. There are exporters available for MySQL and MongoDB that help collect database-specific metrics and expose them for Prometheus to scrape. I have explored them outside of Kubernetes, but haven't had a chance to try them within Kubernetes.

    • @visheshkumarsingh9818 4 years ago +1

      @@justmeandopensource Yes, I know about the different exporters available; I installed them via Helm. But what do we need to specify in the Prometheus-operator for it?
      I installed it, but am not able to see the metrics.

    • @justmeandopensource 4 years ago +1

      @@visheshkumarsingh9818 If you installed the exporters, then you must have a metrics endpoint. You will have to add that endpoint to the Prometheus configuration to enable Prometheus to scrape the metrics from it. Depending on how you deployed Prometheus, it might just be a matter of updating the configmap and restarting the pod(s).
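      A minimal sketch of such a scrape entry (the job name and target are hypothetical; 9104 is the usual mysqld_exporter port):
      scrape_configs:
        - job_name: mysql-exporter
          static_configs:
            - targets: ['mysql-exporter.default.svc:9104']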

  • @lavanyaanbu1234 5 years ago +1

    Hi, I have a query. I have installed Jenkins, Grafana, Prometheus and Spinnaker, creating (dynamic) persistent volumes using Helm charts. I am trying to define a disaster recovery plan for it. How do I back up all the resources inside the cluster?

    • @justmeandopensource 5 years ago

      Hi Lavanya,
      Backup and Recovery is another topic in my list. I haven't explored the options yet. But the below link looks promising.
      github.com/heptio/velero
      Thanks,
      Venkat

  • @tekconstructors 3 years ago +1

    so "figlet" made it as a banner app. Why did "banner" not make it? Just curious.

  • @lazybongguy 5 years ago +1

    Hey Venkat, great video. Could you also make a video on the Prometheus-operator? Also show how to add scrape targets in both cases, Prometheus (static and service discovery) and Prometheus operator (ServiceMonitors), and if possible how to modify/add alerting rules in Prometheus.

    • @justmeandopensource 5 years ago

      Hi Ashish, thanks for watching this video. I will try my best to do it. Thanks.

  • @HeyMani92 4 years ago +1

    Hi Venkat, I am not able to create NFS volumes on Ubuntu 16.04. Could you please help me with this?

    • @justmeandopensource 4 years ago +1

      Hi, thanks for watching. Where are you stuck? Could you give more details?

    • @HeyMani92 4 years ago

      @@justmeandopensource In the background, persistent volumes are not being created because the NFS configuration is not set up properly. I have already gone through your dynamic NFS provisioning video, but I still get the issue when I mount this path on another server.
      This is the command: mount -t nfs :/srv/nfs/kubedata /mnt

  • @claudiogarcia7557 3 years ago +1

    Hello Venkat, excellent tutorial videos as usual. Venkat, can you make a video teaching Prometheus Cortex and S3? Thanks a lot, mister.

  • @manikandans8808 5 years ago +1

    Hi Venkat. It works superbly, but I can't input the values file. I tried many times but it's not taking the parameters from it. For Grafana it does not claim the persistent volume; the pods get deployed with the default configuration.

    • @justmeandopensource 5 years ago

      Hi Mani,
      Thanks for watching this video.
      So your issue is that you can't install Prometheus using Helm with a custom values file?
      What error do you see? Or is it completely ignoring the changes in your values file and deploying with the default configuration?
      The values file you download for Prometheus is a huge one and has many configuration options for lots of components. If you had incorrectly updated a different component in the file, you wouldn't see your change when deployed.
      So please pay attention when you are editing the values file.
      Thanks,
      Venkat

    • @manikandans8808 5 years ago +1

      @@justmeandopensource It's ignoring the values file. For Grafana I've tried many times, but it's not working.

    • @justmeandopensource 5 years ago +1

      @@manikandans8808
      I would suggest you make one change at a time and see if it has worked.
      I think the changes I made were to the service (type from LoadBalancer to NodePort, and the NodePort value) and the persistent volume size. Make sure to edit the right section. Play with it and I'm sure you will eventually get it working.
      Do "helm delete prometheus --purge" in between tests.

  • @vykuntarao7179 4 years ago +1

    Which terminal are you using?

    • @justmeandopensource 4 years ago +1

      Hi, thanks for watching. I use gnome terminal with zsh and some plugins.
      I have explained my terminal setup in the video below.
      ruclips.net/video/soAwUq2cQHQ/видео.html
      Cheers.

  • @apitest6274 4 years ago

    Hi, thanks for your great tutorial. I'm facing a problem: at 14:07, after I use "helm install ...", my pod/prometheus-alertmanager and pod/prometheus-server are just pending and can't start. Do you have any idea?

    • @apitest6274 4 years ago

      Ah, I found that I hadn't installed the NFS server, so that's why.

    • @justmeandopensource 4 years ago

      @@apitest6274 Thanks for watching and glad that you managed to resolve the issue. Cheers.

  • @avinashnarisetty7923 5 years ago +1

    Hello Venkat, I have followed your video but I got stuck on the NFS dynamic provisioner. I couldn't get it working; could you please help me?

    • @justmeandopensource 5 years ago +1

      Hi Avinash, thanks for watching. What error do you get exactly? I have done separate videos on NFS provisioning.
      ruclips.net/video/AavnQzWDTEk/видео.html
      ruclips.net/video/to14wmNmRCI/видео.html
      You will have to first make sure that your worker nodes can mount the NFS share successfully.
      Cheers.

  • @joshuawilliams9518 5 years ago +1

    Nice work... I want to ask a question: what do I need to change in the Prometheus values if I want to use Ingress?

    • @justmeandopensource 5 years ago +2

      Hi Joshua,
      Thanks for watching this video.
      In this video, I exposed the Prometheus service as a NodePort service. You can also use ingress, which is a bit more involved.
      First get an ingress controller deployed in your cluster, with haproxy proxying the requests to the worker nodes. I am not sure if you have watched my Nginx ingress video; if not, please watch it and follow all the steps in it.
      ruclips.net/video/chwofyGr80c/видео.html
      Now you have an ingress controller in your cluster.
      In the prometheus.values file, leave the service type as ClusterIP. Don't change it to NodePort like I did.
      Then, under the Prometheus server section, enable the ingress:
      ingress:
        ## If true, Prometheus server Ingress will be created
        ##
        enabled: true
      Then, a few lines down, set the DNS name you want to use to access your application. Below I have used prometheus.example.com:
        ## Prometheus server Ingress hostnames with optional path
        ## Must be provided if Ingress is enabled
        ##
        hosts:
          - prometheus.example.com
      Now install prometheus with this values file as usual. This will create the ingress resource for you automatically.
      $ kubectl -n prometheus get ingress prometheus-server
      Now you need to add an entry to the /etc/hosts file on your local workstation, mapping prometheus.example.com to your haproxy IP.
      Now when you visit prometheus.example.com, it will hit your haproxy, which will forward the request to one of your worker nodes, and the ingress controller running on that node will forward it to the prometheus service.
      Hope this makes sense.
      Thanks

  • @pcsridharbe 1 year ago

    Hi Venkat, there is no init in Helm version 3. Can you guide us on how to install Tiller with Helm version 3?

  • @vedicbhakt 4 years ago +1

    Hi Venkat, I really appreciate the effort in your videos, especially for bare-metal clusters. I have Zabbix already running on my on-premise server and a Kubernetes cluster which is also running on-premise. Now I want to integrate my existing Zabbix with my K8s cluster. Please let me know about its feasibility.
    Thanks in advance.

    • @justmeandopensource 4 years ago +1

      Hi, Thanks for watching. I have no experience using Zabbix monitoring. I am afraid I will have to spend some time exploring options. Cheers.

  • @sohailahmedeasygoing 5 years ago +1

    Explained very well. Thank you very much

  • @1sbollap 5 years ago +2

    Can you please tell me your GitHub URL so I can download the k8s resources?

    • @justmeandopensource 5 years ago

      Hi, thanks for watching this video. The GitHub URL for my kubernetes repo is in the description.
      https://github.com/justmeandopensource/kubernetes
      Thanks,
      Venkat

  • @sudheshpn 5 years ago +1

    I get the below error while deploying Prometheus. I created a ClusterRole and ClusterRoleBinding for my monitoring service account.
    helm install stable/prometheus --name prometheus --values values.yaml --namespace monitoring --tiller-namespace monitoring
    Error: release prometheus failed: namespaces "monitoring" is forbidden: User "system:serviceaccount:monitoring:tiller" cannot get resource "namespaces" in API group "" in the namespace "monitoring"

    • @sudheshpn 5 years ago +1

      Got resolved by adding the cluster-admin role:
      apiVersion: rbac.authorization.k8s.io/v1beta1
      kind: ClusterRoleBinding
      metadata:
        creationTimestamp: null
        name: prometheus-monitoring
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: cluster-admin
      subjects:
        - kind: ServiceAccount
          name: tiller
          namespace: monitoring

    • @justmeandopensource 5 years ago +1

      Hi Sudhesh, thanks for watching this video.
      By default, when you initialize Helm, it will create the tiller service account and deploy the Tiller component in the kube-system namespace.
      I see you have deployed Tiller in a separate namespace called "monitoring". The first step in installing the Tiller component in your cluster is to create a service account and give it the cluster-admin role so that Tiller can deploy resources using Helm.
      May I know how you created this tiller service account and how you deployed the ClusterRole and ClusterRoleBinding?
      Thanks.

    • @sudheshpn 5 years ago

      @@justmeandopensource I created a ServiceAccount (tiller) in my monitoring namespace and attached a ClusterRoleBinding for the cluster-admin ClusterRole to my tiller service account. I initialized Tiller using --tiller-namespace monitoring. Is this the right way to do it in production?
      apiVersion: v1
      kind: Namespace
      metadata:
        creationTimestamp: null
        name: monitoring
      spec: {}
      status: {}
      ---
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        creationTimestamp: null
        name: tiller
        namespace: monitoring
      ---
      apiVersion: rbac.authorization.k8s.io/v1beta1
      kind: ClusterRoleBinding
      metadata:
        creationTimestamp: null
        name: prometheus-monitoring
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: cluster-admin
      subjects:
        - kind: ServiceAccount
          name: tiller
          namespace: monitoring

    • @justmeandopensource 5 years ago +1

      Cool. Glad that you got it resolved. Good work.

    • @justmeandopensource 5 years ago +1

      Nothing wrong with creating the tiller service account in a separate namespace. As long as the service account has the cluster-admin ClusterRole and the corresponding ClusterRoleBinding, it should work. Thanks.

  • @sudheshpn 5 years ago +1

    Hi Venkat. I see the prometheus-alertmanager pod is unable to mount the mount path inside the pod. The PV and PVCs are in Bound state, and the NFS provisioner is running too. I tried to create a sample pod with the same persistentVolumeClaim name as the one created in my namespace, but I see the same exception. Is it some kind of bug?
    Warning FailedMount 4m47s kubelet, k8s-slave MountVolume.SetUp failed for volume "pvc-8e04833e-9746-11e9-9001-42010a800004" : mount failed: exit status 32
    Mounting command: systemd-run
    Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/8e3eab60-9746-11e9-9001-42010a800004/volumes/kubernetes.io~nfs/pvc-8e04833e-9746-11e9-9001-42010a800004 --scope -- mount -t nfs 10.128.0.5:/monitoring/monitoring-prometheus-alertmanager-pvc-8e04833e-9746-11e9-9001-42010a800004 /var/lib/kubelet/pods/8e3eab60-9746-11e9-9001-42010a800004/volumes/kubernetes.io~nfs/pvc-8e04833e-9746-11e9-9001-42010a800004
    Output: Running scope as unit run-r3b91a34e6f6b4e9ca46ba3bd0e51abb2.scope

    • @sudheshpn 5 years ago +1

      From the master and slave nodes I am able to successfully mount the NFS mount point.

    • @justmeandopensource 5 years ago +1

      What version of Kubernetes cluster are you running?

    • @sudheshpn 5 years ago

      @@justmeandopensource v1.13.4 running on GCP. Do I need to pass --cloud-provider in my kubelet configuration file?

    • @justmeandopensource 5 years ago

      All my videos were done on bare metal (on-prem); I haven't tested on any cloud platforms. If you are using a cloud provider, it's easier to use their persistent volumes (gcePersistentDisk) instead of the nfs-provisioner. It could be anything, like pod networks. I'm afraid I have little experience playing with instances in Google Cloud.
      Could you post the full output (kubectl describe) of the sample pod with the PVC? If it's large, you can paste it at pastebin.com
      Thanks,

    • @sudheshpn 5 years ago +1

      @@justmeandopensource The issue was resolved by setting the nfs-client-root mountPath to /persistentvolumes, which is the default setting in deployment.yaml.

  • @MudassirAlics 5 years ago +1

    Hello,
    Do the steps shown in this video apply to Azure Kubernetes Service as well? How similar or different is it compared to what you have shown?
    Thanks

    • @justmeandopensource 5 years ago +2

      Hi Mudassir, thanks for watching this video.
      Yes, you can follow the same steps. But since you are using a managed Kubernetes solution with Azure, you don't have to set up dynamic NFS provisioning for volumes; you can use AzureDisk for persistent volumes. You will have to create a storage class to use AzureDisk. I then exposed Grafana as a NodePort service. You can also do that, but you will need to allow the NodePort in the firewall for the worker nodes. Or you can just use a LoadBalancer-type service since you are in the cloud. Otherwise the steps are all the same. Thanks.

  • @_siva_polisetty 5 years ago +1

    Hi Venkat, actually I have a few doubts about service accounts, Helm and Tiller.
    1. How do we know we need to create a service account with a specific name? For example, in this case it is nfs-client-provisioner, or in Helm's case the service account is tiller. How do we get to know that?
    2. I watched your Helm video; there you mentioned that when you run helm install jenkins, it will pick the cluster from the .kube/config file and deploy there. But what if I have a multi-cluster config file? How do I pick a specific cluster and deploy to it? Could you please help me with this.

    • @justmeandopensource 5 years ago +1

      Hi Siva, I came to know that we need to create service accounts by reading the documentation; the Helm documentation talks about the tiller service account.
      If you have multiple clusters configured in your kubeconfig file, you will have to choose the context (cluster and namespace) before using helm from the command line. You can use kubectl to check which cluster you are connected to and then use helm. Or you can have multiple kubeconfig files, one for each cluster, and export the KUBECONFIG environment variable.
      Thanks
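      A short sketch of both options (the context name and file path are hypothetical):
      $ kubectl config get-contexts
      $ kubectl config use-context dev-cluster
      Any subsequent helm/kubectl command now targets dev-cluster. Or, with one kubeconfig per cluster:
      $ export KUBECONFIG=~/.kube/dev-cluster.config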

  • @pradeepbhuyan 4 years ago +2

    It's a very nice tutorial on using Helm, Prometheus and Grafana. Do you have any PDF documentation or a Git link so I can practice? Please provide a link to practice this lab. Thanks again.

    • @justmeandopensource 4 years ago +2

      Hi Pradeep, thanks for watching. I don't have any documentation for this video, but generally you'll find most of the stuff in my GitHub repo github.com/justmeandopensource/kubernetes

  • @Channel_test12 5 years ago +2

    Thank you so much for sharing this! Can you please also do a session on Prometheus Alertmanager and its integration with Slack? I am using Helm to install stable/prometheus-operator and am not getting how to update the rules for Alertmanager, and also, if possible, how to trigger only certain alerts. Thanks!

    • @justmeandopensource 5 years ago +1

      Hi Pooja, thanks for watching this video. I will try and play with these concepts and if I get anywhere, I will definitely do a video on it.

  • @MyYuichan 4 years ago +1

    Hi Venkat, can you explain how we can access Grafana not from the node host, but from my own machine?

    • @justmeandopensource 4 years ago +1

      Hi Shidiq, thanks for watching. The usual way would be to expose the grafana service as NodePort or LoadBalancer, or use ingress to access it. If you want to access it from your machine, you can use the kubectl port-forward command.
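      A one-line sketch of that, assuming the release is called grafana in the grafana namespace and the service listens on port 80:
      $ kubectl -n grafana port-forward svc/grafana 3000:80
      Grafana is then reachable at http://localhost:3000 on your machine.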

  • @amulraj0 5 years ago +1

    Did anyone have an issue adding the Prometheus datasource into Grafana? I see this error in the developer console: "has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource". Otherwise, can someone provide me with working versions of Grafana and Prometheus?

    • @justmeandopensource 5 years ago +1

      Hi Amul, thanks for watching. I will try this video in my environment and let you know if anything has changed. Cheers.

    • @justmeandopensource 5 years ago

      Hi Amul, I just tested this video step by step and all working exactly as shown in this video.

    • @amulraj0 5 years ago

      @@justmeandopensource Hi Venkat, thanks for checking for me. I tried the whole thing again and strangely I'm getting the same error while adding the Prometheus data source into Grafana. The error on the Grafana UI is "Cannot read property 'status' of undefined" and the Chrome developer console shows "GET 172.42.42.102:32322/api/v1/query?query=1%2B1&time=1573167664.312 net::ERR_ABORTED 404 (Not Found). Access to XMLHttpRequest at '172.42.42.102:32322/api/v1/query?query=1%2B1&time=1573167664.312' from origin '172.42.42.102:32323' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource." But never mind, I will try with web developers locally here. :-)

  • @homefootage1 5 years ago +2

    Hi Venkat, thanks for the video, it was very helpful as usual 👍🏻
    The Prometheus installation worked perfectly, but I'm facing an issue installing Grafana. I'm getting the error below after deployment:
    Pod Status: Init:CrashLoopBackOff
    It seems to be something related to the PersistentVolume; do you have any idea, sir? Thanks

    • @justmeandopensource 5 years ago +1

      It can't be persistent-volume related; if it can't get a volume, the grafana pod will be in Pending state.
      May I know what version of Kubernetes and Helm you are using? And what sort of dynamic storage provisioning are you using? I will do a test run of this video in my environment tomorrow and let you know. Meanwhile, can you check whether it works without persistent volumes? Thanks.

    • @homefootage1 5 years ago +1

      @@justmeandopensource It runs fine without persistence. I'm using Helm 2.14.13 and Kubernetes 1.15.3. The PV works fine with Prometheus; the issue is between Grafana and the PV.

    • @homefootage1 5 years ago +1

      @@justmeandopensource I followed your video to set up NFS and provision the PV; it runs fine with Prometheus.

    • @justmeandopensource 5 years ago +1

      @@homefootage1 I just tested this video completely now and everything is working perfectly fine. Please check the below pastebin link for my testing.
      pastebin.com/LMWhMLNA

    • @justmeandopensource 5 years ago +1

      Did you edit grafana.values and enable the persistent volume? It is set to false by default.

  • @alexanderhill2915 5 years ago +2

    Hello Venkat, everything seems to have gone well; all my Prometheus resources are up and I can access the Prometheus dashboard. But it doesn't seem to get any metrics. Any ideas?

    • @justmeandopensource 5 years ago +1

      Hi Alexander, is it only a few metrics that are not showing, or do you not see any metrics at all?
      Thanks.

    • @alexanderhill2915 5 years ago +1

      @@justmeandopensource Sorry for the late reply; I didn't see that you had replied. I don't see any metrics at all.

    • @justmeandopensource 5 years ago +1

      @@alexanderhill2915 Thanks for the comment. It was recorded a while ago, so I need to run through this again in my environment and see if this video is still relevant or needs any tweaks. I will test it out today and get back to you. Thanks.

    • @justmeandopensource 5 years ago +1

      Hi Alexander, I just followed this video step by step and got everything exactly as shown in the video.
      * Deployed a Kubernetes cluster with 1 master and 2 worker nodes
      * Deployed Helm & Tiller
      * Deployed the dynamic NFS provisioner
      * Installed Prometheus using the helm chart
      * Installed Grafana using the helm chart
      * Added the Grafana dashboard (ID: 8588)
      I can see CPU, memory and network utilization in Grafana. The only metric I couldn't see was Disk I/O.
      Could you please double-check each step to find out exactly where the problem is? Can you check whether you see the metrics in Prometheus and whether they are only missing in Grafana?
      Thanks

    • @alexanderhill2915 5 years ago +1

      @@justmeandopensource OK, let me start again from scratch just in case I missed a step.

  • @ratnakarreddy1627 5 years ago +1

    Hello Venkat,
    Will a PVC be created if the StorageClass was created in a different namespace?

    • @justmeandopensource 5 years ago

      I think that shouldn't be a problem, although I haven't tested it. To give you a solid answer, I am going to test that now: creating a storage class in the default namespace and trying to create a PVC in a different namespace.
      Will get back to you shortly.
      Thanks

    • @justmeandopensource 5 years ago +1

      Hi Ratnakar, I just verified it, and it doesn't matter where you create your PVC: StorageClasses are cluster-wide, not namespaced.
      I followed my dynamic nfs provisioner video (ruclips.net/video/AavnQzWDTEk/видео.html) and then created a PVC in a different namespace. The persistent volume got created automatically without any problem.
      Hope this clarifies your doubts.
      Thanks.

    • @ratnakarreddy1627 5 years ago +1

      Hello Venkat,
      I had made some mistakes while creating the PVC; due to that, I was not able to create a PVC in the Prometheus namespace.

    • @justmeandopensource 5 years ago

      @@ratnakarreddy1627 Cool.

  • @Canada_couple_vlogs 2 years ago

    Hi, thanks for the informative video. Can we monitor websites attached to Kubernetes in Grafana?

  • @hereforyouwhat 2 years ago

    Hi,
    What is the difference between Prometheus+Grafana and the Kubernetes Dashboard if we want to monitor a k8s cluster?
    Or in which scenarios should Prometheus+Grafana be used over the Kubernetes Dashboard?

  • @oraculotube 2 years ago

    How can we send all the k8s metrics to an external Ubuntu Prometheus/Grafana instance?

  • @ramallways6321 1 year ago

    Hi bro, I've tried the ingress rule as path-based routing rather than hostname-based. Checking in Grafana, I only see the ingress of the hostname-based routing for the controller which exposes /metrics to Prometheus.
    I can't see the metrics for the path-based ingress. Can you give me an idea about this? I need to see path-based routing metrics too.

  • @imamrisnandar4572 5 years ago +1

    Hi Venkat, thanks for an awesome video. I'm trying to install it, but the alertmanager pod's status is CrashLoopBackOff. What do I have to check? And if Alertmanager is not installed, what happens?

    • @imamrisnandar4572 5 years ago +1

      Hi Venkat, sorry to add one question: for the Grafana data source for Prometheus, can we add a URL with an ingress connection?
      My connections both use ingress, but it's not connecting. Many thanks.

    • @justmeandopensource 5 years ago +1

      @@imamrisnandar4572 I just tried following this video in my local environment and it worked exactly as shown. The alertmanager pod was running fine. Alertmanager is a separate microservice which sends notifications for alerts that you define in Prometheus. In Grafana, you can use an ingress URL for your Prometheus data source, but first check whether you can access Prometheus via the ingress from your browser. I used NodePort in this video. Cheers.

    • @imamrisnandar4572 5 years ago +1

      Hi Venkat, thanks for your reply. I'm using PKS, and the cluster can't connect directly to the internet, so images have to be pushed to our registry (Harbor) and the image values edited to point at my registry (using the show values command in Helm 3). That's my case, but the alertmanager pod status is still CrashLoopBackOff.

    • @justmeandopensource 5 years ago +1

      @@imamrisnandar4572 Hmm. Did you find anything in the logs for the alertmanager pod?

    • @imamrisnandar4572 5 years ago

      @@justmeandopensource My CLI: kubectl -n prometheus logs prometheus-alertmanager-58d77b6cfb-dxkm9 -c prometheus-alertmanager
      Error: level=error ts=2019-12-03T15:52:29.954Z caller=main.go:353 msg="failed to determine external URL" err="\"/\": invalid \"\" scheme, only 'http' and 'https' are supported"
      The rest of the log lines are info level only.

  • @shubhamagarwal5566 4 years ago +2

    Hi Venkat, all the tutorials are brilliantly explained. I was wondering how to set up the Alertmanager, as right now it just shows "no alert groups found".
    Thanks

    • @justmeandopensource 4 years ago +2

      Hi Shubham, thanks for watching. I haven't gone in depth into configuring alerts. You will have to set up rules in Prometheus and point Prometheus at the Alertmanager, and you will have to configure Alertmanager as to how it alerts. If I get a chance, I will explore this. Cheers.

  • @ratnakarreddy1627 5 years ago +1

    Hello Venkat,
    Could you please tell me which file we need to change if we would like alerts reported (e.g. by email)?

    • @justmeandopensource 5 years ago

      Hi Ratnakar, I haven't explored the alerting feature in Prometheus. It's done by configuring Alertmanager and configuring Prometheus to send events to Alertmanager. You can check the link below for more details.
      prometheus.io/docs/alerting/overview/
      And thanks for suggesting this topic. I will add it to my list and have a play with it, and possibly post a video later. I have videos scheduled for the next 4 weeks, so it will be a month or so before I can record it.
      Thanks.
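      For the email case specifically, the change goes into Alertmanager's own configuration (alertmanager.yml); a minimal hedged sketch, with every address and host hypothetical:
      route:
        receiver: email-team
      receivers:
        - name: email-team
          email_configs:
            - to: oncall@example.com
              from: alertmanager@example.com
              smarthost: smtp.example.com:587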

  • @nareshpandian1321 5 years ago +1

    Hi Venkat,
    I think Rancher might be better than Prometheus and Grafana, because Rancher can do all the activities in the cluster. I've seen your Rancher videos too; they're fantastic, and it also monitors all the events.
    Which one is better?
    I guess it's Rancher (dashboard). Please correct me if I'm wrong.

    • @justmeandopensource 5 years ago +1

      Hi Naresh, thanks for watching this video. Actually, Rancher and Prometheus/Grafana are meant for two different purposes; you can't compare one with the other. Rancher is for managing your cluster, or many clusters, in a single interface, and you can create resources in your cluster with it. Prometheus is a metrics engine that scrapes various metrics from different resources in your cluster. Grafana is for visualizing the metrics and pulls the data stored in Prometheus: CPU, memory, I/O, network utilization and various application-specific metrics.
      You have limited monitoring capability in Rancher, and in Rancher you can create any resource in your cluster, like a Deployment or StatefulSet, which Prometheus/Grafana can't do.
      Hope you understood the fundamental difference. I would use both these tools.
      Thanks.

  • @1sbollap 5 years ago +1

    If I change the namespace from default to something else, I see an error message: "Error creating: pods "nfs-client-provisioner-67679d4fff-" is forbidden: error looking up service account default/nfs-client-provisioner: serviceaccount "nfs-client-provisioner" not found" and the pod does not start. So I'm wondering whether this solution only works with the default namespace. If I want to use this, does the nfs-client pod need to be in the same namespace as all my other pods?

    • @justmeandopensource 5 years ago +1

      That shouldn't be a problem. You can deploy the nfs-client-provisioner in any namespace, and a deployment in any namespace can make use of it; when creating a persistent volume claim, you refer to it by storage class. Having said that, I haven't tried cross-namespace persistent volumes.
      Thanks,
      Venkat

  • @pallavladekar4466 5 years ago +1

    Hey hi,
    I'm getting the error "Failed to create dashboard model p.a is not a constructor" on Grafana's dashboard. I've been looking for a solution but haven't succeeded.

    • @justmeandopensource 5 years ago

      Hi Pallav,
      Thanks for watching this video.
      I just ran through this complete video myself following each step; it's working perfectly fine for me, although the Grafana UI has changed slightly compared to when I recorded this video.
      May I know where exactly you get the problem? Could you give me more details on your issue, please? I will see if I am able to reproduce it.
      Thanks,
      Venkat

    • @pallavladekar4466 5 years ago +1

      @@justmeandopensource Hi Venkat,
      Thanks for such a quick response. There was an issue with the Grafana Helm chart version: I was using chart v2.3.1 with app v6.0.2, but now I've tried the same version you are using and it's working (but only without a persistent volume).
      With a persistent volume, I get the error "chown: changing ownership of '/var/lib/grafana': Operation not permitted" in the Grafana pod logs.

    • @justmeandopensource 5 years ago

      Hi Pallav,
      I think it depends on how you configured your dynamic persistent volumes. Did you use a dynamic NFS provisioner similar to what I did in the video? If so, did you follow all the steps, like exporting the NFS share with the correct options and setting the share ownership to nfsnobody:nfsnobody?
      Thanks,
      Venkat

    • @pallavladekar4466 5 years ago +1

      Thank you very much, Venkat. I had missed some NFS options in the /etc/exports file; it's working now.
      Great video.

    • @justmeandopensource 5 years ago

      Cool.

  • @OhDearBabajan 4 years ago +1

    Great video! Thank you for putting it together. Just one question: would you say one NFS server for persistent storage is sufficient to handle all the writes? Does it cache fast enough for the entire Prometheus cluster? I'm sure it's a good start nonetheless. Also, now that Helm 3 doesn't support/have Tiller, does this demo still work?

    • @justmeandopensource 4 years ago +3

      Hi Dimitri, thanks for watching. This was just for demonstration purposes. For a production use case you would use something more robust and concrete.

    • @OhDearBabajan 4 years ago +1

      @@justmeandopensource Got it! So from a Kubernetes standpoint, would that entail multiple NFS servers?

    • @justmeandopensource 4 years ago +2

      @@OhDearBabajan You can use any number of nfs servers as your backend storage.

  • @anithak1585 4 years ago +1

    Hi Venkat, the session is clear and good. I have a few doubts about exposing NodePorts for Prometheus:
    should I modify the nodePort and IP in the service YAML, or the default configuration?

    • @justmeandopensource 4 years ago +2

      Hi Anitha, thanks for watching. Can you please explain your question in a bit more detail?

  • @leagueoflegendswildriftnep2236 3 years ago

    I'm having a hard time setting up email notifications. I tried to go inside the pod, but I do not have permission to edit /usr/share/grafana.

  • @sherifakmal1108 3 years ago

    Hi Venkat. While installing helm prometheus-operator, the kube-state-metrics pod is not running; the error shows the readiness and liveness probes getting connection refused, so the pod is marked unhealthy and doesn't come up. Please help me.

  • @cenubit 5 years ago +1

    How do I send Kubernetes metrics to a remote standalone Prometheus?

    • @justmeandopensource 5 years ago +2

      Hi Girts, thanks for watching this video. I haven't tried connecting k8s cluster to external prometheus server yet. But it looks interesting. I will explore the options and if I get anywhere with it, I will record a video. Cheers.

  • @piby1802 4 years ago +1

    Hi Venkat!
    Your videos are a great help. Thank you for putting in so much effort for us :)
    I am particularly struggling with the storage aspect of Kubernetes these days.
    I am running my cluster on VirtualBox using Vagrant. I tried to use VirtualBox synced folders for storing data, but since they don't support fsync, many applications like MongoDB don't run properly on them.
    I finally resorted to attaching VDI disks in my Vagrantfiles and running Ceph on k8s using Rook.
    I am currently using Rook and Ceph for block and (S3-like) object storage.
    It would be great if we could get a video on Rook and Ceph, and a comparison of Ceph, NFS and EdgeFS ^^
    Thx!

    • @justmeandopensource 4 years ago +1

      Hi piby, thanks for watching. I had been using the NFS solution for dynamic volume provisioning, and since many viewers asked for GlusterFS, I started a different series to cover basic GlusterFS concepts. I will soon be recording videos on k8s with Gluster as the storage backend.
      ruclips.net/p/PL34sAs7_26wOwCvth-EjdgGsHpTwbulWq
      A few users also asked about Ceph/Rook, which I am yet to explore. Will definitely do videos on that as well. Cheers.

  • @kasimshaik 5 years ago +1

    Hi Venkat, could you create a video on Pod Security Policy? I have been studying PSP (Pod Security Policy) and need to clarify a few queries about it.

    • @kasimshaik 5 years ago +1

      Do you have expertise in that area (PSP)?

    • @justmeandopensource 5 years ago

      Hi Kasim, thanks for watching this video. I haven't looked at Pod Security Policy yet, but I should be able to test it on my test cluster. Let me know what your query is. Thanks

    • @kasimshaik 5 years ago

      @@justmeandopensource May I have your personal e-mail address, so that I can send an e-mail with complete details of what I have tried with PSP? Here is my ID: kasim123@gamil.com

  • @nah0221 4 years ago +1

    brilliant ... thanks Venkat !

  • @imranrazakhan2569 4 years ago +1

    In the Grafana datasource, why did you use the Prometheus NodePort? As both are in the cluster, you could use the Prometheus ClusterIP and not expose Prometheus at all.

    • @justmeandopensource 4 years ago +3

      Hi Imran, thanks for watching. Yes, you are right; we could have used the ClusterIP, which is sufficient. People also query Prometheus directly, so I thought of exposing that too.

  • @Oswee 5 years ago +1

    It would be really great to see an HAProxy setup and good practices. I tried Traefik: it works great and the setup went smoothly, but it is just an L3 proxy. I want to try HAProxy.

    • @yiphui9684 5 years ago +2

      ingress-nginx-controller is great

    • @gouterelo 5 years ago +1

      @@yiphui9684 With MetalLB it's a lifesaver, using SSL/TLS in the controller!

  • @ramdesi1 4 years ago +1

    Hi Venkat, thanks for the detailed explanation. I have a question: we have many AKS clusters in our environment. Should I install Prometheus & Grafana on each cluster and maintain multiple Grafana consoles? I'd also like to know about security-related concerns.

    • @justmeandopensource 4 years ago +1

      Hi Ramanan, thanks for watching. You can take either of two approaches; each has its own benefits.
      Either install the monitoring stack (Prometheus/Grafana) on each cluster, or have a separate cluster or a central monitoring infrastructure and collect metrics from all your clusters, so that you have a single place to go to.

  • @vandanasharma8461 5 years ago

    Hi, thanks for this video. Do you have any reference for an alerting system with Prometheus or Grafana?

    • @justmeandopensource 5 years ago

      Hi Vandana, thanks for watching. You can use Alertmanager, which is a separate component.
      prometheus.io/docs/alerting/overview/

  • @Pallikkoodam94 5 years ago

    Hi Venkat,
    Thank you for sharing this video.
    It would be great if you could provide the details/technical explanation of how the Prometheus namespace gets the metrics of other namespaces.
    Thank you,

    • @justmeandopensource 5 years ago +1

      Hi Ajeesh, thanks for watching this video. Most of my videos are getting-started videos that only touch the basics. As I am covering a breadth of different technologies, I can't go too deep into any one topic. I will see if I can do a video on this in detail. Cheers.

  • @srujanareddy8623 3 years ago

    Hi Venkat, I'm a fresher, new to DevOps. I have one doubt: what is the difference between (Prometheus, Grafana) and ELK? We use both for monitoring purposes. What is the difference between monitoring and logging? Can you please help me with this?

  • @1sbollap 5 years ago +1

    After I ran helm install stable/prometheus --name prometheus --values prometheus.values --namespace prometheus,
    I got this error: "pod has unbound immediate PersistentVolumeClaims (repeated 3 times)". Any ideas why? Answer: it worked after I deployed the nfs-provisioner to the default namespace.

    • @justmeandopensource
      @justmeandopensource  5 years ago

      This means that the cluster couldn't provision a persistent volume for the PVC defined in the Helm chart. Are you using any form of dynamic volume provisioning?
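
      To troubleshoot this, you can inspect the storage classes and the pending claims with something like the following (the claim name is an assumption; substitute whatever kubectl get pvc shows):
      $ kubectl get storageclass
      $ kubectl get pvc -n prometheus
      $ kubectl describe pvc prometheus-server -n prometheus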

    • @justmeandopensource
      @justmeandopensource  5 years ago

      That makes sense.

  • @vudinhdai2638
    @vudinhdai2638 5 years ago +1

    When I ran the command helm install stable/prometheus... I got an error: "chart incompatible with Tiller v2.14.0-rc.2". Please help me fix this problem :(

    • @justmeandopensource
      @justmeandopensource  5 years ago

      Hi, thanks for watching this video. Please try using the latest stable version of Helm, not a pre-release, alpha, or beta. Thanks

    • @vudinhdai2638
      @vudinhdai2638 5 years ago +1

      I followed your previous video about getting started with Helm and installed from the binary releases. How can I choose a stable version of Helm? I can't see any version numbers, just a tar file to download and use.

    • @justmeandopensource
      @justmeandopensource  5 years ago +1

      @@vudinhdai2638 Hi, go to the official releases page using the link below.
      github.com/helm/helm/releases
      On that page, I can see 2.14.0-rc2 and 2.14.0-rc1, which are pre-releases. Ignore those and download 2.13.1, which is the latest verified release.
      Thanks,
      Venkat

    • @vudinhdai2638
      @vudinhdai2638 5 years ago +1

      Oh! Thank you so much!

    • @justmeandopensource
      @justmeandopensource  5 years ago

      @@vudinhdai2638 You are welcome.

  • @pankajmahto2370
    @pankajmahto2370 4 years ago

    Hi Venkat,
    Can you please help with the error below? I am getting it while running the following command.
    I am using Helm 3 (v3.2.1+gfe51cd1) for the installation.
    helm3 install stable/prometheus prometheus --values /tmp/prometheus.values --namespace prometheus
    Error: failed to download "prometheus" (hint: running `helm repo update` may help)
    I tried the hint as well but am still unable to install.
    Thanks in advance :)

    • @pankajmahto2370
      @pankajmahto2370 4 years ago

      Hi friends, please can someone help with this error? I got stuck at this step. I tried searching Google for help but am still unable to find the cause of the error. I am new to Kubernetes and unable to resolve this alone. Please help.

    • @justmeandopensource
      @justmeandopensource  4 years ago

      @@pankajmahto2370 Thanks for watching. I think I spotted the problem: the order of the release name and the chart is wrong.
      What you have is:
      $ helm3 install stable/prometheus prometheus --values /tmp/prometheus.values --namespace prometheus
      Try this instead:
      $ helm3 install prometheus stable/prometheus --values /tmp/prometheus.values --namespace prometheus
      I hope you have the stable repo in your Helm installation. If not, run the below commands first.
      $ helm repo add stable kubernetes-charts.storage.googleapis.com
      $ helm repo update

    • @shoryasingh6566
      @shoryasingh6566 3 years ago

      @@justmeandopensource Hi Venkat, I am using Helm 3.6.0, and even after following the above comments it still shows an error stating:
      Error: failed to download "stable/prometheus" (hint: running `helm repo update` may help)
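
      The old stable chart repository at kubernetes-charts.storage.googleapis.com was later decommissioned, and the Prometheus chart moved to the prometheus-community repository, so on newer Helm versions something like the following may work instead:
      $ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
      $ helm repo update
      $ helm install prometheus prometheus-community/prometheus --namespace prometheus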

  • @rajeevghosh2000
    @rajeevghosh2000 3 years ago

    Thanks for the great video, Venkat. Can I ask a quick question? Does Prometheus pull the metrics directly from the containers/pods? Does it not need cAdvisor? Also, if I understood correctly, Prometheus pulls the metrics over HTTP. Does that mean every container should listen for HTTP requests from Prometheus?

  • @StefanoCiccolini
    @StefanoCiccolini 4 years ago +1

    Hello, congratulations on the explanation.
    I wanted to ask why installing Prometheus with the values file gives me this error:
    Error: error unmarshaling JSON: json: cannot unmarshal string into Go value of type map[string]interface
    I'm doing it all with PowerShell on Windows 10.
    Thank you

    • @justmeandopensource
      @justmeandopensource  4 years ago +2

      Hi Stefano, thanks for watching. I believe something went wrong in your values file when you updated it: wrongly formatted values, or possibly an indentation error. Try pulling the values file and installing without any changes, then make your changes one by one to find out which line in the values file is causing the issue.
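
      That unmarshaling error usually means a key the chart expects to be a map was given a plain string, often through bad indentation. An illustrative sketch (the keys here are examples, not the exact lines from the video):
      # correct: storageClass nested as a map
      alertmanager:
        persistentVolume:
          storageClass: nfs
      # wrong: the whole thing flattened into one string
      alertmanager: persistentVolume.storageClass=nfs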

  • @kasimshaik
    @kasimshaik 5 years ago

    Hi Venkat, do we really need PVs for Prometheus? Can we opt for the hostPath option instead of a PV? I just wanted to clarify; I have not tried it.

    • @justmeandopensource
      @justmeandopensource  5 years ago

      Hi Kasim,
      Thanks for watching this video.
      Yes, you need some form of persistent storage for the Prometheus pod. Prometheus stores all the metrics collected from the various services. If the Prometheus pod crashes and restarts, and you don't have a persistent volume enabled, then you will lose all the previously collected metrics.
      You can use hostPath, and it will be fine as long as the Prometheus pod runs on that worker node. But what if the pod crashes and gets started on another worker node? A hostPath is tied to a particular host. If you really want to use hostPath, then you need to make sure the Prometheus pod always gets started on the same node. You can do this by defining a nodeSelector, as sketched below.
      Thanks.
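
      A minimal sketch of that pinning in a pod spec (the node name and path are placeholders):
      spec:
        nodeSelector:
          kubernetes.io/hostname: kworker1
        volumes:
          - name: prometheus-data
            hostPath:
              path: /data/prometheus
              type: DirectoryOrCreate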

    • @kasimshaik
      @kasimshaik 5 years ago

      @@justmeandopensource Hi Venkat, we have an NFS mount path mounted across all worker nodes. We are using the NFS client option for sharing configmap files.

    • @justmeandopensource
      @justmeandopensource  5 years ago

      In that case, it should be fine to use hostPath. If the directory defined in hostPath can be mounted on all worker nodes, then there is absolutely no problem with using it. Cheers.

  • @lemont9061
    @lemont9061 4 years ago +1

    Please do a video with AppD

    • @justmeandopensource
      @justmeandopensource  4 years ago +1

      Hi Sangeeta, thanks for watching. I will see if I can look into that, though I primarily focus on open-source products.

    • @lemont9061
      @lemont9061 4 years ago

      Thanks for the reply. Can you suggest a GemFire (caching) learning video?

  • @rahulmalgujar1110
    @rahulmalgujar1110 5 years ago +1

    Thanks for the video. I am trying to init Helm but getting this error: "Error: unknown flag: --service-account". Why so?

    • @justmeandopensource
      @justmeandopensource  5 years ago +1

      Hi Rahul, thanks for watching. You are using Helm v3. In this video, I used Helm v2.14. For Helm v3, there is no Tiller component to deploy in your cluster; all you need is the helm binary. Cheers.

    • @rahulmalgujar1110
      @rahulmalgujar1110 5 years ago +1

      @@justmeandopensource Thanks for your reply. I also read that in one of the docs, but when I try to install anything using Helm, it says "failed to download". I don't know why.

    • @justmeandopensource
      @justmeandopensource  5 years ago +1

      @@rahulmalgujar1110 With Helm v3, there won't be any default repositories. You will have to add a repository so that you can search and pull charts.
      Check whether you have any repos enabled by running the below command.
      $ helm repo list
      I believe you don't have any repos, so add and update the stable repo with the below two commands.
      $ helm repo add stable kubernetes-charts.storage.googleapis.com
      $ helm repo update

    • @justmeandopensource
      @justmeandopensource  5 years ago +1

      If you are following this video for the Helm installation steps, bear in mind that the same commands I showed won't work with Helm v3. For example, I passed --namespace prometheus to helm install, which automatically creates the namespace; with Helm v3, you have to create the namespace manually before installing.
      Also, the --name option is deprecated.
      For example, in Helm v2:
      helm install --name prometheus stable/prometheus
      whereas in Helm v3:
      helm install prometheus stable/prometheus
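
      Newer Helm v3 releases (v3.2 and later) also accept a --create-namespace flag, so the manual step can be skipped, e.g.:
      $ helm install prometheus stable/prometheus --namespace prometheus --create-namespace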

    • @rahulmalgujar1110
      @rahulmalgujar1110 5 years ago +1

      @@justmeandopensource Thanks for the reply. It worked for me.

  • @sebastienlevallois393
    @sebastienlevallois393 5 years ago +1

    That's just great, man. Thank you so much!

    • @justmeandopensource
      @justmeandopensource  5 years ago +2

      Hi Julien, thanks for watching and taking time to comment/appreciate. Cheers.

  • @pandu226
    @pandu226 4 years ago

    Hi friend, I'm following your video and getting the below error while installing Prometheus:
    # helm install prometheus stable/prometheus --values /tmp/prometheus.values --namespace prometheus
    Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(Deployment.spec.template.spec.securityContext): unknown field "runAsGroup" in io.k8s.api.core.v1.PodSecurityContext

    • @pandu226
      @pandu226 4 years ago

      I'm using Helm version 3.

    • @justmeandopensource
      @justmeandopensource  4 years ago

      @@pandu226 Hi, thanks for watching. Are you using the latest chart version of Prometheus? Can you please try other chart versions to see whether you get the same problem? I am trying to find out if the issue is cluster-wide or specific to the Prometheus deployment.
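
      To list the available chart versions and install a specific one, something like this should work (the version number is only an example):
      $ helm search repo stable/prometheus --versions
      $ helm install prometheus stable/prometheus --version 11.0.0 --values /tmp/prometheus.values --namespace prometheus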

  • @systemadministrator8192
    @systemadministrator8192 5 years ago +1

    Hello Venkat, as usual, good job. I get N/A for some metrics.
    Can you recommend how to fix it?

    • @justmeandopensource
      @justmeandopensource  5 years ago +1

      Hi, thanks for watching this video. Where exactly do you get that? I mean, at which point in this video? And could you please tell me which metrics you don't get values for?
      Thanks

    • @full_hause_5993
      @full_hause_5993 5 years ago +1

      @@justmeandopensource I used Grafana dashboard 8588; however, Deployment memory, Deployment CPU, and used cores all show N/A. Thanks.

    • @justmeandopensource
      @justmeandopensource  5 years ago +2

      I will try dashboard 8588 tomorrow and see if I get the same results. It could be that the dashboard's metric queries are wrong.
      Have you tried looking at it after some time? Is it never getting updated? Thanks

    • @full_hause_5993
      @full_hause_5993 5 years ago +1

      @@justmeandopensource Yes, we tried :(

  • @vishalchauhan9342
    @vishalchauhan9342 4 years ago

    Pod scheduling is failing with the following error: pod has unbound immediate PersistentVolumeClaims

  • @knightrider6478
    @knightrider6478 5 years ago +1

    Hi Venkat again :)
    How can I implement Prometheus and Grafana on my standalone k8s cluster, which runs on 2 VPSs and where I have also configured HAProxy and the Nginx ingress?
    I have tried to enable the ingress resource from the Prometheus values file, but without success. I also didn't change the service to NodePort as you did in the video, because I have the HAProxy load balancer and Nginx ingress, and I want to access it from the internet like this: prometheus.my-domain.com.
    I'm not so fluent with the .values configurations of Helm charts.
    Can you clear my path with some advice?
    Nice video series!
    I would also like to suggest making some videos on how to deploy k8s and do all the good stuff using VPSs.
    Thank you and best regards.

    • @justmeandopensource
      @justmeandopensource  5 years ago +1

      Hi Knight, I personally don't enable the ingress from the values file. I leave it as false/disabled in values.yaml and configure the ingress myself using the Nginx ingress controller. Thanks.
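
      A minimal sketch of such an ingress (the hostname is illustrative, and the service name and port assume the stable/prometheus chart defaults with a release named "prometheus"):
      apiVersion: networking.k8s.io/v1
      kind: Ingress
      metadata:
        name: prometheus
        namespace: prometheus
      spec:
        rules:
          - host: prometheus.my-domain.com
            http:
              paths:
                - path: /
                  pathType: Prefix
                  backend:
                    service:
                      name: prometheus-server
                      port:
                        number: 80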

  • @josephbatish9476
    @josephbatish9476 5 years ago +1

    Good job, mate.

    • @justmeandopensource
      @justmeandopensource  5 years ago +1

      Thanks for watching this video, Joseph.

    • @josephbatish9476
      @josephbatish9476 5 years ago +1

      @@justmeandopensource Please do more videos about Helm and Kubernetes.

    • @justmeandopensource
      @justmeandopensource  5 years ago

      @@josephbatish9476 I have done a getting-started video on Helm in Kubernetes. I hope you have already watched it; if not, here is the link.
      ruclips.net/video/HTj3MMZE6zg/видео.html
      Thanks.

  • @vishalchauhan9342
    @vishalchauhan9342 4 years ago

    kubectl logs -f pod/prometheus-1595420753-server-68b899667b-b4275
    error: a container name must be specified for pod prometheus-1595420753-server-68b899667b-b4275, choose one of: [prometheus-server-configmap-reload prometheus-server]
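
    As the error itself suggests, this pod has two containers, so one of the listed names has to be passed with -c, for example:
    $ kubectl logs -f pod/prometheus-1595420753-server-68b899667b-b4275 -c prometheus-server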

  • @realthought2262
    @realthought2262 4 years ago

    Hey, hope you are doing well. I was stuck on a problem where my Grafana container was crashing again and again. I tried the logs and everything else but failed, then I started reading the comments: there was something going on with the NFS server, and (no_root_squash) did the trick. I modified the /etc/exports file, deleted the Helm chart, and reinstalled it. Kaboom! I can see the amazing Grafana dashboard... thanks, everybody!!
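
    For anyone hitting the same issue, an /etc/exports entry with that option might look like this (the export path is a placeholder; adjust it to your NFS setup):
    /srv/nfs/kubedata  *(rw,sync,no_subtree_check,no_root_squash)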

  • @adonaik8s
    @adonaik8s 5 years ago

    nice

  • @boontootv
    @boontootv 4 years ago +2

    Very good job. What is the default root password?

    • @justmeandopensource
      @justmeandopensource  4 years ago +1

      Hi Adel, thanks for watching. The default root password for my Vagrant-provisioned virtual machines is "kubeadmin".

    • @boontootv
      @boontootv 4 years ago +1

      @@justmeandopensource Where did you configure this?

    • @justmeandopensource
      @justmeandopensource  4 years ago +2

      @@boontootv Are you asking about the virtual machines? It will be in the bootstrap.sh script that is used during Vagrant provisioning.

    • @boontootv
      @boontootv 4 years ago +1

      @@justmeandopensource Thank you. Very good work!

    • @justmeandopensource
      @justmeandopensource  4 years ago +1

      @@boontootv You are welcome.