Sir, thank you once again for your invaluable assistance. The session was extremely beneficial. It would have been even more advantageous to see the lab executed in an AWS environment.
Hello buddy, thank you for your support and feedback. As mentioned in the video, I used all open-source tools, and their implementation is standard across cloud platforms. You can use the same steps to implement this project in Azure or AWS as well. Let me know if you face any issues, happy to help :)
Great work --- it would be great to have a PLATFORM-OPS course, with tools like Kubernetes, ELK, Prometheus, Grafana, troubleshooting, AWS, Splunk, and Dynatrace.
Thank you for the suggestions. Currently I am working on the CKA series, which will have 40+ videos. Once I am done with that, I will try to create more videos on the topics you have suggested. Thank you once again.
continue sir❤❤
Yes brother, I will resume this series after 2 weeks
I like the way you teach. Do you have a full course on prometheus and Grafana?
Not yet, may be in future.
Dear Sir, Thanks a lot, next video please.
I am currently focusing on the Azure DevOps series; I will resume this one once I complete the Azure DevOps one.
Can we use this method in production in a company? Will we face any vulnerability issues in the future?
Many enterprises use managed versions of Prometheus and Grafana so that they do not have to worry about security patching, upgrades, server maintenance, and other administrative tasks. Google also has a managed offering called GMP (Google Managed Prometheus). I hope that answered your question.
Hi Piyush,
I would like to know if the same works on EKS, since you are on GCP. Any reason why you used the chart from Bitnami and not the official Grafana repo, I mean the Prometheus stack repo?
It would be much appreciated if you could add a video on enabling Loki on the same cluster and getting the logs in Grafana.
Hello, the manifests that we use are built for Kubernetes, which means they run on any Kubernetes cluster anywhere, whether in the cloud (GKE/AKS/EKS) or on-premises. The application code could have some dependency issues, but other than that, manifests are standard and follow the same pattern. I'll add the Loki integration to my todo list, will do it soon.
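To illustrate the point about portability, a plain Deployment manifest like the sketch below (all names are hypothetical) applies unchanged on GKE, AKS, EKS, or an on-premises cluster:

```yaml
# Minimal, cloud-agnostic Deployment sketch (hypothetical names/image tag)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: nginx:1.25
          ports:
            - containerPort: 80
```

Only things like storage classes, load-balancer annotations, or node selectors tend to be cloud-specific; the core workload manifests are the same everywhere.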
Excellent 🎉
Thank you 🙏😊
20:00 slack integration
Thank you for sharing this
I've installed kube-prometheus-stack, and it installed successfully.
Now I want to create alerts. Do I need to configure my Slack details in the values.yaml file, or somewhere else?
You need to follow the below steps:
kubectl create secret generic alertmanager-slack-webhook --from-literal webhookURL=SLACK_WEBHOOK_URL
kubectl apply -f extras/prometheus/oss/alertmanagerconfig.yaml
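For reference, the alertmanagerconfig.yaml applied in the second step typically looks something like the sketch below, assuming the Prometheus Operator's AlertmanagerConfig CRD and the secret name created in the first command; the channel name and metadata are hypothetical:

```yaml
# Sketch of an AlertmanagerConfig routing alerts to Slack
# (assumes the alertmanager-slack-webhook secret created above)
apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
  name: slack-alerts
spec:
  route:
    receiver: slack
  receivers:
    - name: slack
      slackConfigs:
        - channel: '#alerts'   # hypothetical channel name
          apiURL:
            name: alertmanager-slack-webhook   # secret from the first command
            key: webhookURL
```

The apiURL field references the webhook URL stored in the secret, so the URL itself never has to appear in the manifest.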
Okay great, can you please let me know how I can add and remove alerts from the chart?
Hi, I have managed Prometheus in my production environment. How do I add this Prometheus as a datasource in Grafana?
Hello, which cloud are you using? Adding the datasource is the same for open-source and managed Prometheus. Here's the Grafana installation guide: cloud.google.com/stackdriver/docs/managed-prometheus/query#grafana-deploy
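As an illustration, a Prometheus datasource can also be added to Grafana declaratively with a provisioning file. This is a sketch: the URL assumes an in-cluster Prometheus service and will differ for a managed offering (which usually needs a proxy or auth in front):

```yaml
# Grafana datasource provisioning sketch,
# e.g. /etc/grafana/provisioning/datasources/prometheus.yaml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus-server.monitoring.svc:9090   # hypothetical in-cluster URL
    isDefault: true
```

With a file like this mounted into the Grafana container, the datasource appears automatically without clicking through the UI.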
Thank you
You’re welcome 😊
Can I set up Prometheus on my GCP VM to monitor a GKE cluster, or do we need to set up Prometheus in the cluster only?
Setting it up in the cluster itself is the better choice. If you do that on a VM, it becomes a single point of failure: if the instance goes down, Prometheus goes down with it, and you also run into issues with scaling and everything else. Then you have to use an autoscaling group (in AWS), a MIG (in GCP), or a VMSS (in Azure), which makes for a costly and complex solution. That is why Kubernetes is the right choice if you already have apps running on Kubernetes.
Hi Piyush, I just completed all the tutorials on Azure DevOps, and through Udemy I completed all the courses on Kubernetes too. I am able to clear interviews, but during the client round I faced a question about how many Kubernetes clusters I am using in my current project. Please provide a rough answer to this for future reference.
Hello Ravi, thank you for reaching out. The number of Kubernetes clusters you use depends on the number of projects you are handling and the number of environments you have access to.
For example, if you are an ops person, you would have access to all the non-dev clusters: UAT, Prod, Pre-prod, etc.
If you are a developer, you would only have access to the dev clusters.
You can form your answer accordingly.
@@TechTutorialswithPiyush yeah Piyush... I don't have real-time experience with Kubernetes; can I answer it as 4 clusters for the Dev environment?
@@ravireddy270 your call, but prepare your next answer too: why 4 clusters for dev?
Is GKE Autopilot still restricted or is it possible to see the data?
Hey, what type of data are we talking about? You mean node details?
If we are using Kubernetes on our on-premises servers, we get access to everything and manage everything on our own, including the control plane, worker nodes, workloads, etc.
If you don't want to manage the control plane yourself, you use a managed Kubernetes service such as GKE Standard, which gives you access to your worker nodes and workloads. However, if you don't want to manage your worker nodes either, you use GKE Autopilot, in which you can focus on your workloads while the control plane and worker nodes are managed by GKE. I hope this answered your question.
Can I ask why the number of nodes is 4.78 and not 4 or 5 or any natural number?
Hello, Can you provide some context? Was it there in the video? Can you pinpoint the timestamp?
Hi Piyush, please give the Hashnode link for the EKS tutorial. Thanks!
Thank you for reminding me, just added the link in the description. Here as well
devo.hashnode.dev/comprehensive-aws-eks-cluster-monitoring-with-prometheus-grafanaand-efk-stack-10weeksofcloudops
How do I send alerts via mail instead of a webhook?
You need to add SMTP details in the Alertmanager YAML instead of the webhook.
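For example, the Alertmanager configuration with SMTP details might look like the sketch below; all hosts, addresses, and the receiver name are placeholder values:

```yaml
# alertmanager.yaml sketch for email alerting (placeholder values throughout)
global:
  smtp_smarthost: 'smtp.example.com:587'
  smtp_from: 'alertmanager@example.com'
  smtp_auth_username: 'alertmanager@example.com'
  smtp_auth_password: 'YOUR_SMTP_PASSWORD'
route:
  receiver: email-team
receivers:
  - name: email-team
    email_configs:
      - to: 'oncall@example.com'
```

The email_configs receiver replaces the slack/webhook receiver; the SMTP connection details live under global so multiple receivers can share them.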
@@TechTutorialswithPiyush tnx
Sir, can I use a ServiceMonitor instead of a Probe to monitor an endpoint?
Please, how can we connect? I love your videos and would love a paid course if there is one.
I don't do paid training, but you can reach out to me over Discord or LinkedIn for any guidance.
@@TechTutorialswithPiyush Alright sending you a message soon