🔴 - To support my channel, I’d like to offer Mentorship/On-the-Job Support/Consulting - me@antonputra.com
This is the most professionally valuable advice I have found on RUclips so far. You really break down the practical uses. Great job!
Thanks!
Hi, I know I say this every video, but you never disappoint. Clean visuals, great information, plus some edge cases here and there where you explain what problems would arise and how to solve them. All in all, really amazing work you put in, Anton!
Great videos, looking forward to new ones on Kubernetes. Short videos like these are refreshing and fun to watch.
Thank you Xaoticex!
this is gold, Anton! thank you from Argentina!
Short and well explained. Really cleared up my doubts between StatefulSet and Deployment.
Thank you! More stuff like that coming soon :)
Thanks Anton for the awesome videos. Great content and well explained. Keep up the good work!!!
thank you!
2:00 Small error here: multiple pods can use the same PVC of type RWO if they're scheduled on the same node. RWO doesn't make the PVC exclusive to one pod; it restricts access from pods that are not on the same node.
Well, technically yes, but in real life that's rarely the case with hundreds of Kubernetes nodes, unless you use node selectors or affinity. However, that reduces availability: if that node fails, all your application instances become unavailable.
@AntonPutra I don't know if it's an anti-pattern or not, but I used that property to my advantage. I created one PVC with RWO (EBS only allows RWO) and then scheduled several services to use that same PVC to store their logs. Then I created another pod to do the log aggregation and shipping. That way, I don't need a sidecar container to ship logs for each service. I haven't set any affinity for any of the services, so I think Kubernetes is also smart enough to assign these pods to the node where the PV resides.
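A minimal sketch of that shared-logs pattern, assuming an EBS-backed gp3 StorageClass and placeholder names/images; since the claim is ReadWriteOnce, the shipper pod only works if it lands on the node where the volume is already attached:

```yaml
# Hypothetical shared-logs setup: one RWO claim mounted by the app pods and by
# a single log-shipper pod; all of them must run on the node holding the volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-logs            # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce            # EBS only supports RWO
  storageClassName: gp3        # assumed EBS-backed StorageClass
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: log-shipper
spec:
  containers:
    - name: shipper
      image: fluent/fluent-bit # placeholder log-shipper image
      volumeMounts:
        - name: logs
          mountPath: /var/log/apps
          readOnly: true       # the shipper only reads what the apps write
  volumes:
    - name: logs
      persistentVolumeClaim:
        claimName: shared-logs # the same RWO claim the app pods mount
```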
Thanks a lot, very informative!
Hope to see a detailed video on AWS OIDC in EKS and how a service account uses it.
Thanks, I already have a bunch of EKS tutorials on my channel, even with Terragrunt.
ruclips.net/video/yduHaOj3XMg/видео.html
Anton, you're really amazing!!!! 🌟🌟🌟🌟🌟 The content is FIRE!!!! 🔥🔥🔥🔥🔥 Pure joy to watch!!! 👏🔥🌟🚀
Thank you :)
Amazing like always! Thanks!!!
Thanks Gabriel!
Very good explanation, a quick refresher video. Thanks!
thanks!
Around 4:07 ...
Do only certain CSI drivers support this volume re-attach across nodes? I would guess that if a volume lives on the disk of a specific node, it would not be moved to another node when the pod is rescheduled?
It's the default behavior for network-attached volumes. Now, if you use the Local Persistent Volume controller, it allocates volumes from the node's underlying disks and ensures the pod is always scheduled on the node where its volume resides (useful if you need fast local disks, for example for Kafka, Cassandra, etc.).
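For anyone who wants to try that, a rough sketch of a local volume setup (node name, disk path, and class name are made up): the PV carries nodeAffinity, so the scheduler keeps the consuming pod pinned to the node that owns the disk, and WaitForFirstConsumer delays binding until the pod is actually scheduled.

```yaml
# Sketch of a pre-provisioned local volume (no dynamic provisioning).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-local                           # made-up class name
provisioner: kubernetes.io/no-provisioner    # local PVs are created manually
volumeBindingMode: WaitForFirstConsumer      # bind only once a pod is scheduled
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-node-1
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: fast-local
  local:
    path: /mnt/disks/ssd1                    # made-up path to the local SSD
  nodeAffinity:                              # pins consumers to the disk's node
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-1                     # made-up node name
```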
@@AntonPutra makes sense, thanks!!
Thanks Anton...great and clear content. New Sub.
Thank you!
Awesome explanation, hats off to you
Thanks! Appreciate it!
Thanks for that video! In short: a Deployment is used to create as many replicas as we want. A StatefulSet assigns a PVC as soon as a node is created, when the labels match. A ReplicaSet makes sure that each time a new node is created a new pod lands there, which is useful, for example, in monitoring use cases. Did I miss something?
Typically you don't create a ReplicaSet yourself; it's managed by the Deployment object.
Thanks Anton for the cool video; it's short but very informative.
thanks!
1:27 what is LB in this case? Ingress?
Hey @AntonPutra, your recent Kubernetes content has been top class 👏. Let's get those into a playlist?
Sure =) Thanks for visiting!
I'd love to see a tutorial on how to set up k8s with autoscaling on Hetzner. I'd consider sending monetary support for this as well.
Thanks! Hetzner looks like a German hosting provider. I have never used it before :)
@@AntonPutra It's amazing, cheap, and they have good servers. Probably the best part is the server auction, where you can get really nice servers for cheap.
@@woss_io ok, I'll take a look
Regarding the solution of dynamically creating PVCs with a StatefulSet: can we just make a StorageClass without needing a StatefulSet, am I right?
You need a StorageClass in either case. 1. If you use a Deployment, you need to create the PVC yourself using a storage class. 2. Use the volumeClaimTemplates feature of a StatefulSet to dynamically create a PVC for each replica of that STS. The StorageClass is only responsible for allocating the volume from the cloud provider.
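To illustrate option 2, a minimal StatefulSet sketch with volumeClaimTemplates (names, image, and the gp3 StorageClass are placeholders); it stamps out one PVC per replica, e.g. data-db-0, data-db-1, data-db-2:

```yaml
# Hypothetical StatefulSet; assumes a headless Service named "db" exists.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: postgres:16               # placeholder image
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:                    # one PVC per replica, created on demand
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: gp3              # assumed StorageClass
        resources:
          requests:
            storage: 20Gi
```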
This is a great video. I love the pictorial representation of the YAML configs. Well done.
If you don't mind me asking, what software do you use for editing/making your videos?
Thanks, I use Adobe.
Awesome video as always. Please make a video about Ceph and Rook. Thanks a lot.
Thanks, sure in the future
In a StatefulSet, pods may be deployed on different nodes, and the nodes may be in different AZs. Now, if a pod is rescheduled onto a node other than its previous one, how will the volume get attached to it on a node in a different AZ? As far as I know, EBS only works within an AZ, not across AZs.
True, you may have an issue if you have a small cluster. I usually configure the StorageClass to allocate EBS volumes only from the same AZ and deploy the StatefulSet in a single AZ. If you had large stateful applications deployed across multiple AZs, it could be very expensive due to AWS data transfer charges.
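Something like this sketch of a single-AZ StorageClass for the AWS EBS CSI driver (the zone name is just an example):

```yaml
# StorageClass that only provisions gp3 volumes in one availability zone.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3-single-az
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
volumeBindingMode: WaitForFirstConsumer     # pick the AZ/node before provisioning
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.ebs.csi.aws.com/zone
        values:
          - us-east-1a                      # example zone; use your own
```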
Beautiful explanation!
thanks!
Hi, I created a Deployment with 1 replica and an RWO hostPath PVC, inserted data, changed the image, and the new pod started in RollingUpdate mode with my data present and no mounting error. Is this related to it being a one-node cluster, or did I misunderstand something? (The same happens with 2 replicas.)
Yes, ReadWriteOnce technically applies to the underlying node, not the pod. In small test clusters, it's very likely that pods land on the same node, but in production environments with hundreds of Kubernetes nodes, it's not. You can still use pod affinity to force pods onto the same node, but it would be a Kubernetes anti-pattern.
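If someone does want to go the podAffinity route anyway, here is a hedged sketch (labels, claim name, and image are hypothetical): the second pod is forced onto the node running the pod that already holds the RWO volume.

```yaml
# Hypothetical pod that co-schedules with the pod labeled app=writer
# so both can mount the same ReadWriteOnce claim.
apiVersion: v1
kind: Pod
metadata:
  name: reader
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: writer                      # label on the pod that owns the volume
          topologyKey: kubernetes.io/hostname  # "same node"
  containers:
    - name: reader
      image: busybox:1.36                      # placeholder image
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: shared-data                 # hypothetical RWO claim
```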
Thanks :)
Hi, for deploying the Prometheus Agent as a DaemonSet, could you recommend a storage strategy/mode for the WAL?
Take a look at Local Persistent Volumes; it's almost like hostPath but better.
Request for Additional Tutorials on Cloud Migration
I would like to express my sincere gratitude for the invaluable DevOps tutorials you have been sharing on your RUclips channel. They have been a tremendous resource for both learning new concepts and brushing up on existing knowledge in the field of DevOps.
I am particularly interested in the topic of cloud migration, including scenarios such as migrating from on-premises to cloud environments, as well as inter-cloud migrations (e.g., AWS to GCP, Azure to AWS, and vice versa). If you have any existing tutorials on these topics, I would be grateful if you could point me in their direction.
Additionally, if you do not currently have tutorials covering these specific areas, I kindly request that you consider creating content on cloud migration in the future. Such tutorials would be incredibly beneficial for many professionals in the DevOps community, including myself.
Thank you once again for your dedication to sharing knowledge and contributing to the growth of the DevOps field. Your efforts are greatly appreciated.
Best regards,
Ibrahim
Crisp & perfect.
Thanks!
great explanation! thanks
thanks!
Hello,
I'm stuck in a situation where I scaled the Grafana ReplicaSet to 3, and surprisingly all three pods are running using the same PVC, which is a single EBS volume of type gp2. I'm scratching my head over how a single EBS volume can get attached to three different pods. Please help me.
The details look like this:
Access Modes - ReadWriteOnce
Storage Class Name - gp2
Storage - 10Gi
Pods - loki-grafana-5fd8756bb8-c2rcq loki-grafana-5fd8756bb8-hjfcn loki-grafana-5fd8756bb8-n2cwt
Status - Bound
I found the reason: it was because all three pods were scheduled on the same node, so they can share the same EBS volume.
Hi, yes, technically ReadWriteOnce applies to the underlying node, not the pod. However, you should not rely on that behavior except in special cases where you intentionally configure it using podAffinity.
Can you please share the must-haves for an EKS cluster if we try to build it as IaC using CDK or Terraform on AWS?
You can use this tutorial; the source code is in the description - ruclips.net/video/yduHaOj3XMg/видео.html
Can I scale up a database StatefulSet horizontally when choosing `ReadWriteMany` network storage?
Most likely not. It also depends on the database you're using. Some databases have Kubernetes operators that can perform this task for you.
Can't we use a rolling update strategy in a Deployment with 3 replicas that mounts a PVC backed by an EBS volume?
EBS volumes only support 'readWriteOnce', which means you can mount that volume to a single pod at any given time.
@@AntonPutra Hmmm, I read that readWriteOnce means the volume can be mounted on a single node, so many pods can mount it if they're on the same node. Am I wrong?
@@breinerfranciscobatallacai8379 Yes and no. You can mount a "readWriteOnce" volume to multiple pods only if the pods are created on the same node, which most of the time is not the case and not practical (you can do it with podAffinity if you want).
sounds great, well read.
Hi Anton, I love your videos. Can you make a video on multicluster traffic routing for a sample application?
Sure, in the future, I assume you want me to use one of the service meshes such as Istio or Linkerd?
@@AntonPutra istio
Do you have an actual sample demo for this? Anyway, you've done a great job!!
Sure, I have lots of examples in my GitHub repository. Are you interested in any specific ones?
github.com/antonputra/tutorials/blob/main/docs/contents.md
Brilliant
thanks!
There's something about the way you read the info that makes the video very interesting haha
😅
👍🫡
Thanks 😊