Short and to the point. I will give this a try. Thanks for the effort.
Thanks a lot for your feedback 🙏
NICE BRO, SHORT, IT WORKS, easy to follow, thank you very much from Argentina :)
Thanks for your encouraging feedback 🙏
Thank you so much for making this content, especially the Elasticsearch installation.
Thanks for the feedback 🙏
Nice video, it is very useful for checking logs in a Kubernetes environment
Thanks for the feedback 🙂
After following this, I was able to get all the components you showed up and running. I found that you have excluded PV creation, the Elastic secrets, etc., without which this installation will not work. Including them would make it more useful!!
Sure, I will check the documentation and create another one if needed 👍
Hey, I wanted to ask something: how will the Fluentd Helm chart know about the ConfigMap and RBAC which we are applying? Where do we mention these files in the values.yaml of Fluentd??
The ConfigMap is referenced in the Deployment/DaemonSet; kindly review it once more and let me know in case you need any additional info
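To illustrate that reply, here is a minimal sketch of a Fluentd DaemonSet that consumes the applied ConfigMap and ServiceAccount directly (the names fluentd-config and fluentd, the image tag, and the mount path are assumptions, not the exact manifests from the video):

```
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    matchLabels: {app: fluentd}
  template:
    metadata:
      labels: {app: fluentd}
    spec:
      serviceAccountName: fluentd                # created by the RBAC manifest
      containers:
        - name: fluentd
          image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
          volumeMounts:
            - name: config
              mountPath: /fluentd/etc            # Fluentd reads fluent.conf from here
      volumes:
        - name: config
          configMap:
            name: fluentd-config                 # the ConfigMap you applied
```

So when you apply plain manifests, nothing needs to be wired through values.yaml; the DaemonSet references the ConfigMap and ServiceAccount by name.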
@@bhoopeshdevops Sir, I have tried a lot of things, but Fluentd is unable to send logs to Elasticsearch. If you are free, can we please have a small meet where I can share my screen and show you the issue?
I have deployed five different microservices in a Kubernetes cluster as deployments. I have also configured Filebeat on my worker nodes. Now, I need to create an index in Kibana for each microservice. Could you please provide me with the Filebeat configuration file for this? I am using AWS EKS.
I am only getting the default index, filebeat-
You can modify the Filebeat configuration for each of the 5 microservices separately and put an index name in it. I will create a session for the same soon. Let me know if that is fine?
@@bhoopeshdevops Yes, that is fine, but what if I add a 6th service after a few days? Then I would need to edit the Filebeat configuration again?
Please create one session for this.
Sure will do it
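For reference, a minimal sketch of a filebeat.yml that derives the index name from the pod's app label, so every microservice (including a future 6th one) gets its own index without further edits. The label name, index prefix, and Elasticsearch host here are assumptions; adjust them to your setup:

```
filebeat.inputs:
  - type: container
    paths:
      - /var/log/containers/*.log
    processors:
      - add_kubernetes_metadata:          # enriches events with pod labels
          host: ${NODE_NAME}
          matchers:
            - logs_path:
                logs_path: "/var/log/containers/"

output.elasticsearch:
  hosts: ["elasticsearch:9200"]
  # One index per microservice, based on the pod's "app" label
  index: "microservices-%{[kubernetes.labels.app]}-%{+yyyy.MM.dd}"

# Required whenever the default index name is overridden
setup.template.name: "microservices"
setup.template.pattern: "microservices-*"
setup.ilm.enabled: false
```

In Kibana you can then create one index pattern per service (e.g. microservices-orders-*) or a single microservices-* pattern and filter on the label.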
Hi, thank you for the video. I have a question about it: how do you automatically delete logs from Elasticsearch after 7 days, so the disk does not fill up?
Yes, that is easy via index management, by creating an index lifecycle policy
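For reference, a sketch of how such a policy can be created against the Elasticsearch API (the policy name, the logstash-* index pattern, and the localhost:9200 endpoint are assumptions; the same can be done from Kibana under Index Lifecycle Policies):

```
# 1. A lifecycle policy that deletes an index 7 days after it is created/rolled over
curl -X PUT "localhost:9200/_ilm/policy/delete-after-7d" \
  -H 'Content-Type: application/json' -d'
{
  "policy": {
    "phases": {
      "delete": { "min_age": "7d", "actions": { "delete": {} } }
    }
  }
}'

# 2. Attach the policy to new log indices via an index template
curl -X PUT "localhost:9200/_index_template/logs-7d" \
  -H 'Content-Type: application/json' -d'
{
  "index_patterns": ["logstash-*"],
  "template": { "settings": { "index.lifecycle.name": "delete-after-7d" } }
}'
```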
thanks man for this
You're welcome. Let me know if you are looking for something else; you can also explore the other ELK videos in the same playlist 🙏
good work bhoopesh
I have a question: we have not installed Logstash, so why is it still shown in the Kibana dashboard?
The Fluent Bit Helm chart is taking care of it
@@bhoopeshdevops but aren't we using fluentd??
@ashketchum3255 Apologies, yes, Fluentd is working as the agent and sending data directly to Elasticsearch
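For context, the likely reason "logstash" shows up at all is that the Fluentd Elasticsearch output is commonly configured with logstash_format, which names the indices logstash-YYYY.MM.DD even though Logstash itself is never installed. A sketch of such a match block (host and prefix are assumptions):

```
<match kubernetes.**>
  @type elasticsearch
  host elasticsearch.default.svc.cluster.local
  port 9200
  logstash_format true       # indices are created as logstash-YYYY.MM.DD
  logstash_prefix logstash   # change this to rename the index prefix
</match>
```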
Hey, thank you for the great video. Everything is working, but when I try to access Kibana in the browser, it keeps saying "Kibana server is not ready yet", even after a long time. Do you know how to fix this? Greets!
Pls check the logs of the Kibana pod, see what error is coming, and paste the same here
@@bhoopeshdevops Hey, I pasted the error logs twice already, but they keep getting deleted. Is that you or YouTube? Maybe the format of the logs gets blocked as bad input.
By YouTube, no worries… I will search on the basis of your inputs and will let you know
When trying to port-forward svc/elasticsearch 9200, I get a connection refused error
Pls check the logs of the Elasticsearch pod and see if there is any error in the StatefulSet creation
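A few commands that usually narrow this down (assuming the StatefulSet, pods, and Service are named/labelled elasticsearch in the default namespace; adjust to your setup):

```
# Is the StatefulSet creating pods, and are they Ready?
kubectl get statefulset,pods -l app=elasticsearch

# Why is a pod Pending or CrashLooping? (check the Events at the bottom)
kubectl describe pod elasticsearch-0

# Application-level errors from Elasticsearch itself
kubectl logs elasticsearch-0

# Port-forward is refused when the Service has no ready endpoints
kubectl get endpoints elasticsearch
```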
Can you please help me perform the same setup on AKS? I look forward to your response..
I will create another video on the same
@bhoopesh123 Thanks a lot.. if possible, please explain how to read the logs of only one microservice/pod in a desired namespace.. 🙏
I am assuming that to persist the logs in Elasticsearch, you would have to add a PV so that Kibana can show historical logs even if pods are restarted or deleted. Am I correct?
Yes, you are absolutely right.. Better to use Helm charts for it so that all these things are configured easily
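For illustration, a sketch of how that persistence is typically wired into the Elasticsearch StatefulSet via volumeClaimTemplates (storage class, size, and mount path below are assumptions; the official Helm chart exposes the same thing through its volumeClaimTemplate values):

```
# Excerpt of an Elasticsearch StatefulSet spec: one PVC per replica,
# so the data survives pod restarts and deletions.
spec:
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: gp2            # e.g. the EBS storage class on EKS
        resources:
          requests:
            storage: 30Gi
  template:
    spec:
      containers:
        - name: elasticsearch
          volumeMounts:
            - name: data
              mountPath: /usr/share/elasticsearch/data
```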
@@bhoopeshdevops good one.. keep it up
Hi, can you tell me how we can load the logs of all namespaces in a single Kibana dashboard?
This is easy.. remove all the filters from the pane and save the results in a new dashboard.. I will create a video soon 🔜 on this item
Hi, it's a nice video. Can you please make a similar video for OpenSearch and OpenSearch Dashboards, since most people are using those nowadays after the Docker runtime migration?
Nice idea, sure, I will create one on OpenSearch and OpenSearch Dashboards
I got the error [in_tail_container_logs] pattern not match: "{\"log\":\"\\\\......} when running the command: kubectl logs pod/fluentd-.....
Can you help me?
Kindly change the log location in the ConfigMap of Fluentd
@@bhoopeshdevops I have the same problem. I know Elasticsearch very well but I am new to Kubernetes... How can I change the log location? I can't find this configmap...
@@sebastiant5257 Change this parse pattern to json: /^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$/
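To make that concrete, a sketch of the relevant parse section in the Fluentd ConfigMap (assuming the standard fluentd-kubernetes-daemonset style tail source; paths and tag may differ in your setup). If the files under /var/log/containers are Docker json-file logs, the json parser is what matches them; the commented regexp form is for the containerd/CRI log format:

```
<source>
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  read_from_head true
  <parse>
    @type json                       # Docker json-file log lines
    time_format %Y-%m-%dT%H:%M:%S.%NZ
    # For containerd/CRI nodes use a regexp parser instead:
    # @type regexp
    # expression /^(?<time>.+) (?<stream>stdout|stderr)( (?<logtag>.))? (?<log>.*)$/
  </parse>
</source>
```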
Bro, I am new to Kubernetes.. we have an existing Kubernetes cluster with Linux worker nodes and I am trying to set up EFK for it, but I am unable to provision the StatefulSet and the Kibana Deployment using the Docker images, while the nginx image works and provisions both. The major problem is that I am not able to run the kubectl port-forward command; it's giving an error
What errors are coming while trying to set up the StatefulSet?
I have successfully set up everything and can retrieve the logs. However, the issue lies in obtaining logs directly from files within the container, rather than relying on 'kubectl logs.' Do you have any ideas or hints on how to achieve this?
Pls share the location from within the container where you want to capture logs… The same can be mentioned in the ConfigMap
@@bhoopeshdevops want to collect logs from /logs/container/
@@bhoopeshdevops how would u do that?
Pls specify the same path in the ConfigMap
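A sketch of what that tail source could look like, with one caveat: a DaemonSet Fluentd only sees paths mounted into its own pod, so /logs/container/ from the application container has to be shared with it first (for example via an emptyDir or hostPath volume mounted by both, or by running a Fluentd/Fluent Bit sidecar in the app pod). The tag and pos_file below are assumptions:

```
<source>
  @type tail
  path /logs/container/*.log          # the application's own log files
  pos_file /var/log/app-files.log.pos
  tag app.files
  read_from_head true
  <parse>
    @type none                        # keep raw lines; use json/regexp if structured
  </parse>
</source>
```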
Can we give a public IP instead of localhost? I gave one, but I got this error while executing the systemctl enable command: "unable to resolve host ip-172-31-59-93: Temporary failure in name resolution"
Yes, we can. I have used a public IP and DNS in my other video in the same playlist; pls have a look at it and let me know if it works?
When I tried to port-forward Kibana, it says the Kibana server is not ready yet
Pls see the Elasticsearch pod logs; it takes some time to come up 🆙
Same for me! Have you found a fix for this?
Can anyone make a video on deploying ELK + Filebeat on Kubernetes with SSL enabled?
Sure, I will do it.. it is easy. Any tentative time you are looking at??
How do we send Kubernetes logs to S3?
It is easy, just configure the output to write to the S3 bucket. I will create a video soon on this.. Let me know if you need any additional use cases??
@@bhoopeshdevops I already tried setting this up with Fluentd but I am getting this error: unexpected error error_class=RuntimeError error="can't call S3 API. Please check your credentials or s3_region configuration. error = #"
Don't know why I am getting this; it has the necessary permissions and all, but I am still getting it. Can you sort this out or make a video on this requirement?
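For anyone hitting this, a minimal sketch of a Fluentd S3 output (assuming the fluent-plugin-s3 plugin is installed; the bucket, region, and paths are placeholders). The error above usually means the plugin cannot find valid credentials or the region is wrong, so either attach an IAM role (e.g. EKS IRSA on the Fluentd service account) or set the keys explicitly:

```
<match kubernetes.**>
  @type s3
  s3_bucket my-k8s-logs-bucket        # placeholder, use your bucket
  s3_region us-east-1                 # must match the bucket's region
  path logs/
  # Prefer an IAM role over static keys; if you must use keys:
  # aws_key_id  YOUR_ACCESS_KEY
  # aws_sec_key YOUR_SECRET_KEY
  <buffer time>
    @type file
    path /var/log/fluentd-s3-buffer
    timekey 3600                      # write one object per hour
    timekey_wait 10m
    chunk_limit_size 256m
  </buffer>
</match>
```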