List of questions from the interview:
Can you walk me through the CI/CD pipeline you use in your current project, specifically related to Kubernetes?
How do you perform rolling updates for your application in Kubernetes without causing downtime?
When you create a new version of your Docker image, what steps do you follow?
Have you ever worked with horizontal pod autoscaling (HPA) in Kubernetes? If so, how do you set it up?
Explain the purpose of persistent storage in Kubernetes and why it's needed.
Describe a scenario where you would use Persistent Volumes (PVs) and Persistent Volume Claims (PVCs) in Kubernetes.
Have you ever used multiple containers within a single pod in Kubernetes? Provide an example.
How do you manage secrets in your Kubernetes project, and what role does Kubernetes Secret play?
Can you explain a scenario where you would use a service mesh in Kubernetes, especially in terms of authentication and authorization?
Why are Pod Security Policies important in Kubernetes, and how would you implement them to enhance security?
Do you work with resource limits and resource quotas in your Kubernetes setup? If yes, how do you set them up?
How would you implement horizontal pod scaling based on custom metrics specific to your application's performance indicators?
Explain a scenario where pod priority and preemption in Kubernetes would be useful, and have you ever implemented this?
Can you differentiate between Kubernetes Jobs and Cron Jobs, and when would you use each?
In what situations would you use StatefulSets in Kubernetes, and what benefits do they offer over Deployments?
How can you change the number of replicas for a ReplicaSet in Kubernetes, and what should you check for if the replicas are not scaling as expected?
🙏🙏🙏
This was pure value addition. Thanks, mate.
Indeed, @@arshadsiddieque7097
🙏
The answer to the 2nd question (rolling update) is totally wrong.
This is what you call a perfect interview, and it will be useful for currently working DevOps engineers to understand Kubernetes in a broad manner....
Made my day! Appreciate it.
As a dev, I am impressed by how smoothly each subsequent question is related to the preceding one. It helps one connect the dots and understand everything. Great work.
This is really supportive. Means a lot, mate.
QUESTIONS WITH ANSWERS--(WITH DISCUSSION)
CI/CD Pipeline and Kubernetes 🌟
The CI/CD pipeline uses Jenkins with a Git Flow branching strategy. Developers create their own feature branches, which are then merged into the development branch. When a release is ready, a new release branch is created from the development branch, which is then merged into the master branch.
Git Flow Branching Strategy
Feature: Developer-created branches for new features
Development: Main branch for development
Release: Branch created for releases
Master: Production-ready code
Jenkins Stages
SCM Checkout
Compile
Build
Test
Code Coverage
Code Quality Analysis
Security Scan
Vulnerability Scan
Delivery
Deployment
Java-Based Web Application 📊
The application is written in Java and uses Kubernetes.
Experience with Kubernetes
The candidate has experience with Kubernetes.
Rolling Updates 🔄
To perform a rolling update, follow these steps:
Create a deployment YAML file, specifying the desired state of the application.
Mention the API version, kind, and metadata in the YAML file.
Define the replicas, selectors, and template in the YAML file.
Specify the container image, port, and other details in the YAML file.
Apply the YAML file using kubectl apply.
Specify the rolling update strategy in the YAML file.
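A minimal sketch of such a Deployment manifest, with a rolling update strategy that avoids downtime (the names and image below are illustrative, not from the interview):
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                         # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1                      # at most one extra pod during the update
      maxUnavailable: 0                # never drop below the desired count, i.e. no downtime
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: myregistry/my-app:1.0 # illustrative image
          ports:
            - containerPort: 8080
```
Applying an updated manifest with kubectl apply then replaces pods gradually, within the maxSurge/maxUnavailable bounds.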
Creating a New Version of the Docker Image
To create a new version of the Docker image:
Create a Dockerfile.
Build an image from the Dockerfile.
Use the new image for the pods.
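A sketch of that flow, continuing the hypothetical my-app example above (the registry path and tag are illustrative); the build and rollout commands are shown as comments, and only the changed Deployment line is repeated:
```yaml
# docker build -t myregistry/my-app:2.0 .
# docker push myregistry/my-app:2.0
# Then point the pods at the new tag, either imperatively:
#   kubectl set image deployment/my-app my-app=myregistry/my-app:2.0
# or by editing the manifest and re-applying it:
          image: myregistry/my-app:2.0   # new tag; triggers a rolling update on apply
```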
Horizontal Pod Autoscaling (HPA) ⚖
Benefits of HPA
"HPA is a resource that automatically scales the number of pods in a deployment, replica set, or stateful set based on metrics."
Setting up HPA
Enable the metric server in the cluster.
Create a horizontal pod autoscaler YAML file, specifying the scale target reference, metrics, and other details.
Apply the YAML file using kubectl apply.
Monitor the HPA using kubectl get hpa and kubectl describe hpa.
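A minimal HPA manifest along those lines (the target Deployment name and thresholds are illustrative):
```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:                  # which workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above ~70% average CPU
```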
Custom Metrics
Use custom metrics to enable HPA.
Enable the metrics server in the cluster.
Define custom metrics using Prometheus or other tools.
Persistent Storage 📁
Purpose of Persistent Storage
"Persistent storage refers to the ability to store and retain data beyond the lifetime of a single container. It allows data to be saved and accessed even if pods are rescheduled or terminated, or if pods move to different nodes within the cluster."
Setting up Persistent Storage
Create a storage class, specifying the type of storage.
Create a persistent volume claim (PVC), requesting storage resources based on the storage class.
Mount the PVC in the application.
Deploy the application.
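A minimal sketch of that flow (the storage class name, sizes, and paths are illustrative):
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  storageClassName: standard       # assumes a StorageClass named "standard" exists
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: my-app
      image: myregistry/my-app:1.0
      volumeMounts:
        - name: data
          mountPath: /var/lib/app   # where the app expects its data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data         # binds the pod to the PVC above
```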
PV and PVC
PV (Persistent Volume): a piece of storage in the cluster provisioned by the administrator
PVC (Persistent Volume Claim): a request for storage resources by the user or application
Multicontainer Pods 🚀
Experience with Multicontainer Pods
The candidate has experience with multicontainer pods, including sidecar containers.
Using Multicontainer Pods
Use sidecar containers for logging or monitoring.
Deploy a pod with multiple containers, each running a different application or service.
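A sketch of such a pod, with a main container and a logging sidecar sharing a volume (the images are placeholders):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  volumes:
    - name: logs
      emptyDir: {}                       # shared scratch volume for log files
  containers:
    - name: my-app
      image: myregistry/my-app:1.0
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
    - name: log-shipper                  # sidecar that reads the same log directory
      image: myregistry/log-shipper:1.0
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
          readOnly: true
```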
Managing Secrets 🔒
Kubernetes Secrets
"Kubernetes secrets store sensitive information in an encrypted format."
Using Kubernetes Secrets
Store sensitive information in secrets.
Use secrets to manage confidential data in the cluster.
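A minimal Secret sketch (names and values are placeholders; note that Secret data is only base64-encoded by default, which is one reason teams layer on encryption at rest or an external vault, as described in the next section):
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: app-credentials
type: Opaque
stringData:                        # stringData avoids manual base64 encoding
  DB_PASSWORD: changeme            # placeholder value
---
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: my-app
      image: myregistry/my-app:1.0
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: app-credentials
              key: DB_PASSWORD
```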
Encryption and Secret Management 💻
Storing Sensitive Information
We store sensitive information, such as API keys and database passwords, in a secret file using base64 encoding. The key values are stored in HashiCorp Vault, based on the environment.
Integrating with Jenkins
We integrate HashiCorp Vault with Jenkins, which fetches the passwords and secrets stored in the Vault.
Service Mesh in Kubernetes 🌐
Load Balancing and Traffic Management
We use a service mesh, specifically Istio, for load balancing and traffic management. It also provides mutual TLS, certificate, and server certificate verification.
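As a sketch of the mutual TLS part, Istio can enforce strict mTLS for a namespace with a PeerAuthentication resource (the namespace name is illustrative):
```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: my-namespace        # illustrative namespace
spec:
  mtls:
    mode: STRICT                 # only mutual-TLS traffic is accepted between sidecars
```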
Pod Security Policies in Kubernetes 🔒
Defining Security Configurations
Pod security policies allow administrators to define rules controlling security configurations for pods. We implement them using the PodSecurity admission controller.
Enable the PodSecurity admission controller
Write an admission.config file with the enabled admission plugins
Define a constraint template with API version, kind, and spec
Apply the constraint to enforce the host path policy
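A common way to apply this with the built-in PodSecurity admission controller is to label namespaces with a Pod Security Standards level; a sketch (the namespace name is illustrative):
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace
  labels:
    pod-security.kubernetes.io/enforce: restricted   # reject pods violating the "restricted" profile
    pod-security.kubernetes.io/warn: restricted      # also emit warnings on violations
```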
Custom Resource Definitions (CRDs) in Kubernetes 📚
Defining and Using Custom Resources
CRDs allow us to define and use custom resources within our Kubernetes cluster. They help extend the Kubernetes API and create custom objects for specific applications.
Example: We used CRDs for operations and controllers like Prometheus, creating custom resources for managing Prometheus instances.
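For illustration, a minimal CRD might look like the sketch below; the group, kind, and schema are made up for the example:
```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backups.example.com        # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    kind: Backup
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                schedule:
                  type: string     # e.g. a cron expression consumed by a custom controller
```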
Network Policies in Kubernetes 🔗
Fine-Grained Control of Network Traffic
Network policies provide fine-grained control of network traffic within the cluster. We define rules and policies for controlling communication between pods.
Ingress and egress rules
Pod selector (podSelector) field specifying which pods the policy applies to
Name and namespace scope
Policies are additive; traffic is allowed if at least one applicable policy allows it
Main Use Case: Implementing security and compliance by restricting access to sensitive data and services.
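A sketch of such a policy, allowing only the application pods to reach a database pod (labels and port are illustrative):
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-app-to-db
spec:
  podSelector:
    matchLabels:
      app: db                  # the pods this policy applies to
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: my-app      # only these pods may connect
      ports:
        - protocol: TCP
          port: 5432
```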
Resource Quotas and Limits in Kubernetes ⚖
Ensuring Fair Resource Allocation
Resource quotas and limits ensure fair resource allocation and prevent resource exhaustion.
Factors to Consider:
Resource requirements of the application
Critical applications requiring guaranteed resources
Less critical applications with burstable resources
Namespace isolation and capacity planning
Implementation:
Define a resource quota based on CPU, memory, and other resources
Attach the resource quota to a specific namespace
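A sketch of a quota attached to a namespace (the namespace name and values are illustrative):
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
  namespace: team-a            # the namespace the quota is attached to
spec:
  hard:
    requests.cpu: "4"          # total CPU requests allowed in the namespace
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"                 # cap on the number of pods
```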
Horizontal Pod Scaling based on Custom Metrics 📈
Scaling Applications based on Business-Specific Metrics
We implement horizontal pod scaling based on custom metrics using a monitoring tool and a custom metric provider.
Steps:
Deploy a metric server in the Kubernetes cluster
Implement a custom metric provider exposing application-specific metrics
Deploy and configure the Kubernetes custom metric API server
Use the custom metric API server with the Horizontal Pod Autoscaler (HPA)
Example: We used Prometheus as a monitoring tool and the Prometheus adapter for Kubernetes to expose custom metrics.
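Assuming such a custom metric (for example, requests per second exposed through the Prometheus adapter), an HPA referencing it might look like this sketch; the metric name and targets are assumptions:
```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-custom-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 3
  maxReplicas: 15
  metrics:
    - type: Pods
      pods:
        metric:
          name: http_requests_per_second   # assumed custom metric from the adapter
        target:
          type: AverageValue
          averageValue: "100"              # target ~100 requests/s per pod
```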
Resource Management in Kubernetes 📈
Critical Parts and Prioritization
In order to ensure that critical parts of the system receive the resources they need, we assign the highest priority to them. This ensures that they continue to function reliably, even in emergency scenarios where resources are scarce.
Priority Class Name
Kubernetes provides a pod field called priorityClassName, which allows us to assign a PriorityClass (and thus a priority) to pods. When resources are scarce, the scheduler can preempt lower-priority pods in favor of higher-priority ones.
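A sketch of that (the class name and priority value are illustrative):
```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: critical-priority
value: 1000000                            # higher value = higher priority
globalDefault: false
description: "For business-critical workloads"
---
apiVersion: v1
kind: Pod
metadata:
  name: critical-app
spec:
  priorityClassName: critical-priority    # pod references the class by name
  containers:
    - name: my-app
      image: myregistry/my-app:1.0
```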
Kubernetes Job vs. Cron Job
Purpose: a Kubernetes Job runs a single task to completion; a Cron Job runs a task periodically on a schedule.
Design: Jobs are designed for short-lived, one-off tasks; Cron Jobs are designed for recurring tasks.
Example: Jobs suit batch processing or data migration; Cron Jobs suit backups or data synchronization.
Behavior: a Job's pods can be automatically restarted if they fail; a Cron Job automatically manages scheduling and creates a Job for each run.
Definition: A Kubernetes Job is a resource used to run a single task to completion. It is designed for short-lived tasks and can be used for batch processing or data migration.
Definition: A Cron Job is a resource used to run a task periodically on a schedule. It is designed for recurring tasks and can be used for backups or data synchronization.
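A minimal Cron Job sketch for a nightly backup (the schedule and image are illustrative):
```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-backup
spec:
  schedule: "0 2 * * *"          # every day at 02:00
  jobTemplate:
    spec:                        # an ordinary Job spec, created for each run
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: backup
              image: myregistry/backup:1.0   # illustrative backup image
```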
Stateful Sets and Deployments
Purpose: Stateful Sets manage stateful applications; Deployments manage stateless applications.
Design: Stateful Sets are designed for applications with ordered initialization requirements; Deployments are designed for horizontal scaling.
Example: Stateful Sets suit databases and distributed systems; Deployments suit web servers and microservices.
Volume Claims: Stateful Sets automatically manage PVCs for each pod; Deployments do not manage PVCs.
Definition: A Stateful Set is a resource used to manage stateful applications, such as databases or distributed systems. It provides stable network identities and can automatically manage Persistent Volume Claims (PVCs) for each pod.
Definition: A Deployment is a resource used to manage stateless applications, such as web servers or microservices. It is designed for horizontal scaling and does not manage PVCs.
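A sketch of a StatefulSet with per-pod storage (names, image, and sizes are illustrative; application configuration such as credentials is omitted):
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql              # headless Service providing stable DNS names (mysql-0, mysql-1, ...)
  replicas: 3
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.0        # database env/config omitted for brevity
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
  volumeClaimTemplates:           # one PVC is created per pod (data-mysql-0, data-mysql-1, ...)
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```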
Changing Replica Counts
There are two ways to change the replica count of a running ReplicaSet:
Method 1: Edit the ReplicaSet manifest (for example, with kubectl edit) and save it.
Method 2: Use the imperative command kubectl scale with the desired replica count.
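Both methods as a sketch (the ReplicaSet name, labels, and image are illustrative):
```yaml
# Method 1: declaratively, change the replicas field and re-apply the manifest:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-app-rs
spec:
  replicas: 5                    # edit this value, then: kubectl apply -f replicaset.yaml
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: myregistry/my-app:1.0
# Method 2: imperatively:
#   kubectl scale replicaset my-app-rs --replicas=5
```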
Troubleshooting Replica Count Issues
If the replica count is not changing as expected, check for:
Error messages or warnings in the output logs.
Resource constraints specified in the replica set.
Pod termination delays.
Pod disruption budgets and rollout settings, such as minReadySeconds.
By checking these areas, you can identify and resolve issues preventing the replica count from changing.
The answer to the bonus question would be: when we have applied HPA to an app with a minimum of 3 replicas and then try to scale it down to 2, it won't scale down.
if you don't mind, can you add sections for each question in the future videos so it will be easy for the audience to navigate through each question asked. Thanks!
Sure, I will.
24:25. When would you prefer StatefulSets over Deployments?
Ans. In the case of a database deployment, like a MySQL application.
StatefulSets are used for deploying stateful applications like databases, distributed systems and so on.
Pods in StatefulSets have a stable network identity, i.e., they have a permanent name and can be accessed using persistent DNS names.
Also, when you scale StatefulSets, the process is more controlled and sequential.
StatefulSets can automatically manage the PVC for each pod, providing persistent storage.
A Deployment, on the other hand, is for stateless applications like web servers, and is mainly suited for horizontally scaled applications where microservice replicas are interchangeable and given random names.
He spoke exactly the same words as ChatGPT; I searched for the same thing..😂😂😂
Thanks for chiming in.
Hi Mike,
I'm Dinesh from London.
It's an amazing video; for anyone eagerly looking for DevOps interviews, this video alone will be quite enough for the K8s concepts. You guys covered almost all the major topics in K8s. All the questions were great and the answers were short and crisp. Please keep doing this work for a long time; to be frank, it's very useful for me. Keep rocking! If possible, can you do AWS & Azure scenario-based interviews as well? It would be really helpful for everyone. Thanks mate! All the best for your future!
Thanks a lot for such kind words.
Limits and resource quotas are set up using ResourceQuota definitions.
Whenever there is a situation calling for fair resource allocation and preventing resource exhaustion, we use these resource quotas and limits.
We define a resource quota by specifying CPU, RAM, and other resources. Once this is done, we attach the resource quota to a certain namespace, whatever we have.
Thanks for chiming in, mate.
I am also preparing for an interview, so these are good questions.
Thanks a lot, mate! Best Wishes!
I am working in infra support and monitoring. I don't have the knowledge to give answers, and the interviews I am attending these days mostly ask these types of questions, so this helps me a lot.
Glad it was helpful.
Am I the only one who feels like this guy is reading the answers rather than answering?
There weren't many scenario-based questions.
Definitely reading the answers; you can tell by the structure of his answers. 13:47 is one of the most obvious.
Amazingly executed. Thanks to the efforts you put in.
Appreciate your support.
It was a really good one, dude. We request you to share more videos with this person; he is really a very confident person with a lot of potential. :)
Thanks for the support, Rohit. Folks like these are not easily reachable. It's an old interview, after 3-4 months of mail chains he gave me permission to post this interview.
Ohh but it's really great 👍 👌
I think the candidate is a trainer who gives trainings; anyway, this upload is very useful.
Thanks for the feedback.
Wow bro, really, it's a highly informative video ❤
Glad you liked it!
Very useful one, thank you so much.
Means a lot, thank you.
Very useful video, thanks for doing this 😊
Appreciate the feedback.
One of the best Kubernetes scenario-based interviews so far.
Thank you for all the support
Thanks for sharing the questions, great work.
Means a lot, Jitendra.
Nice interview 👍🏻
Glad you liked it
Thank you for such an informative video.
Glad you liked it 🙏🏻
For the last bonus question, is the answer provided by the candidate right, or is there something else? @@LogicOpsLab
If possible, please roll out CI/CD Q&As with this guy; it would be really great.
Will try my best, mate. Cheers.
really helpful!
Glad you liked it.
Thank you for sharing.
Glad it was helpful.
Great Questions! Good job dude!
Thanks a lot, mate!
@bonus question: at run time, using `kubectl scale --replicas=2 -f deployment.yaml` (as the imperative run-time command takes priority over the declarative manifest). I believe once you hit this command, the API server records the request and the ReplicaSet controller reads the desired count as two replicas. So we would have two pods when you check kubectl get pods.
Good answer. Now, what happens if even after this command things don't work as expected? What would be your thought process?
@@LogicOpsLab Not sure; if we make any modifications to the pod metadata, does it work? But my assumption is that the ReplicaSet's function is to maintain the desired state of the pods, so if we request two at run time, there must be two pods running on the cluster. Please share the correct approach to this scenario.
I am also waiting for the answer.
Waiting for the answer @@LogicOpsLab
@mohammedilyas3033 @@rohanekar
You go to the kube-system namespace, check the kube-controller-manager there, and see whether everything is correct or not. Fix it, restart it, and it will work.
Good interview
Appreciate the feedback.
@@LogicOpsLab I have an interview on Monday; could you please help me with a document with scenario-based questions for Terraform, Kubernetes, Git, Docker, and Linux?
I'd suggest that a mere document won't help. Just go through all the relevant videos and learn; you will forget a document easily, while listening to the videos like a podcast will help you more.
What was the answer to the last question, if replicas are not coming down?
Can you please tell me the timestamp?
How do we do canary deployments in Argo CD?
To perform canary deployments in Argo CD, you can follow these general steps:
1. Ensure that Argo CD is properly installed and configured in your Kubernetes cluster.
2. Define your application manifests in a Git repository. These manifests will include multiple versions of your application for canary deployment.
3. Create Argo CD Application custom resources for each version of your application. Specify the desired replicas, service names, and any other relevant settings.
4. Use annotations in your Kubernetes manifests to define canary deployment strategies. Argo CD supports annotations like `argocd.argoproj.io/rollouts`, where you can specify canary deployment settings.
This is just a generic thing, this can be modified accordingly.
5. Trigger the sync process in Argo CD to apply the changes and start the canary deployment.
Here's a simplified example of how you might use annotations for canary deployment:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  annotations:
    argocd.argoproj.io/rollouts: '{"blueGreen":{"activeService":"my-app-active","previewService":"my-app-preview"}}'
spec:
  replicas: 5
  selector:                      # required for apps/v1 Deployments
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: myregistry/my-app:1.0
```
In this example, the `argocd.argoproj.io/rollouts` annotation specifies a blue-green deployment strategy with active and preview services.
Refer to the Argo CD documentation for the latest and detailed information.
@@LogicOpsLab Thanks bro
All the questions are good, but one question you asked on K8s makes no sense: the one about the difference between K8s Jobs and Cron Jobs.
Timestamp?
How many years of experience does he have with Kubernetes?
Less than 3 yrs, IIRC.
What is the experience of the guy giving the interview?
I always discuss the total experience in IT and relative experience in DevOps and Cloud in the first 30 seconds of the video. Looks like people are skipping the intro 😕
hi
Hello
Very difficult to understand the answers.
Apologies, mate. Appreciate the feedback. Did the subtitles help?
@@LogicOpsLab Very much so, thank you.
Please add the answers; the audio is not useful.
👍🏻
Was he selected, bro?
What do you think?
Selected I guess
🤝🏻