Missed opportunity to call this video "installing Postgres the hard way" as a throwback to Kubernetes install tutorials
That was 100% going to be the title but I didn't want to scare anyone!
OK, I decided to rename it. Thanks for the nudge!
@@TechnoTim I prefer the hard way, as it teaches a lot. For a tutorial it's perfect.
@@TechnoTim I found the video because I was searching for a production-ready environment. YouTube notified nothing about the new video release, as always.
Awesome video, I'll definitely be checking this out in my lab.
I'd give a few props to the Zalando team, who created Patroni way back now. As you cited, there are many ways to do the cert thing, but to slicken up the process, maybe bake a note or two into your docs about creating a StepCA instance (and enabling ACME on it), and then just use 'certbot' on the HAProxy, etcd, and Postgres server nodes to pull/refresh the needed certs automatically. The StepCA client auto-adds the root CA of the StepCA server you're talking to into the local trusted cert store, and that prevents all the squawking about untrusted CA this/that/the-other. Great vid.
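Roughly, that flow looks like this (a sketch with hypothetical hostnames, assuming an existing step-ca install; the ACME directory URL follows smallstep's /acme/&lt;provisioner&gt;/directory pattern):

```sh
# On the CA: add an ACME provisioner so standard ACME clients can enroll.
step ca provisioner add acme --type ACME

# On each HAProxy/etcd/Postgres node: trust the root CA, then pull a cert.
step ca bootstrap --ca-url https://stepca.internal.lan \
  --fingerprint <root-ca-fingerprint>
certbot certonly --standalone \
  --server https://stepca.internal.lan/acme/acme/directory \
  -d etcd-1.internal.lan --agree-tos -m admin@internal.lan
```

certbot's renewal timer then keeps the certs fresh, and a deploy hook can reload etcd/Patroni/HAProxy after each renewal.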
That's a much better solution than manually creating and managing certificates, but why run certbot on every node? Store your certificates in HashiCorp Vault and use consul-template to deploy them: simpler, fewer moving parts, and a more elegant solution.
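For anyone curious, a minimal consul-template sketch of that idea (hypothetical paths and role name; assumes Vault's PKI secrets engine is mounted at pki/ with a role named postgres):

```hcl
# Renders a short-lived cert + key from Vault's PKI engine, then reloads Patroni.
template {
  contents    = "{{ with secret \"pki/issue/postgres\" \"common_name=db-1.internal.lan\" }}{{ .Data.certificate }}\n{{ .Data.private_key }}{{ end }}"
  destination = "/etc/ssl/postgres/server.pem"
  perms       = 0600
  command     = "systemctl reload patroni"
}
```

consul-template re-renders the file before the cert's TTL expires, so rotation is hands-off.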
@@menruletheworld - lots of good choices, just as you say. Vault or OpenBao with their PKI backends are both viable. I'll +1 Tim's work talking through OpenSSL's .cnf config options, however, because, though a little lumpy, it's always useful knowledge.
I went backwards. Started with Zalando in Kube: way too old, and good for k8s only. Then gave Crunchy a try; not good for production. Then thought about straight Patroni and found hundreds of people doing it their own way. I will give Tim's a try... seems very complete and straightforward. Thanks buddy.
Amazing and very timely for me! Thank you! Heads up: The Video notes URL in the description has a typo and returns 404 currently. Had to go directly to your root address and get to it from the homepage.
Thank you! I changed the URL and you beat me to it. Just refresh and the link should be updated!
Finally someone who doesn't do only hello-world tutorials. If I had this content when I was starting, I would have skipped many headaches. Please provide more content like this 🙏
Great video! I hope you'll have the time one day to make a video demonstrating how to include Traefik Proxy in our Docker Compose file and use it in our architecture with PostgreSQL and pgAdmin service containers. I really learned so much from your course.
Fantastic video! I love to see how it runs under the hood, not just an automated process.
What a great video! And what great timing, because guess what I was just going to be tasked with at work. Thank you!!!!!
I would like to give you a huge thanks for just how much hard work you have put into all of this, and the coding done on the website.
I am super excited you are sharing this with us.
I am also super excited for your next hardware update, now that you have been living with your co-lo for a while.
Crunchy Data has a pretty good Kubernetes Operator that makes deployment easy
thank you, just at the right time while my company is about to transition from a patroni-managed pg cluster in k8s towards managed postgres hosting - as i need to set up a test system in-house for our development team ❣
Can't wait for the automated version of this with IaC...
this is awesome content!!!
Great video! Some things to add for keepalived: it usually uses multicast, and the traffic will be sent from the interface defined in the config. With VRRPv3 you can switch to unicast, and then you need to define a unicast source and the destinations. Also, in the VRRP VIP block you can add more than one IP, and after each IP you can specify the interface the IP is added to. The unicast checks are done via the interface defined at the top. This is very important for overlay networks, since multicast can generate a lot of traffic there.
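A minimal keepalived sketch of what that describes (hypothetical IPs and interface; state/priority differ per node):

```
global_defs {
    vrrp_version 3              # switch from the VRRPv2 default to VRRPv3
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0              # VRRP traffic and checks use this interface
    virtual_router_id 51
    priority 150
    advert_int 1
    unicast_src_ip 10.0.0.11    # this node's address
    unicast_peer {              # the other keepalived nodes
        10.0.0.12
        10.0.0.13
    }
    virtual_ipaddress {
        10.0.0.100/24 dev eth0  # the VIP, plus the interface it is added to
    }
}
```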
This is an awesome rundown on how to install it "the hard way". This is the way. 🚀
I did this exact setup a while back. Was super tough. Good luck to everyone, probably going to be easier for you all with this video :)
Great video! I would love to see similar content for MariaDB severs.
I hate that this is posted right now, right as I am actively planning an application that will be relying on Postgres... absolute worst timing!! :D
Learned a lot about HA in general with this tutorial. Please please _please_ do one for ElasticSearch, because for the life of me I cannot figure out how to self-host an ES cluster for longer than 2-3 days before it grinds to a screeching halt, despite claiming everything is healthy.
Really great tutorial. Would really love a follow-up video for Kubernetes.
ClusterSSH would be a real asset for videos like this lol
Awesome!
You need a coffee after this.
@@MrEtoel thank you!!!
This video makes me feel even better about using ClickHouse instead of Postgres or CockroachDB
Well, I was wondering about "enterprisify" my kubernetes cluster with a distributed database. That's... coming at the right time!
Next video, a follow up on how to do it on kubernetes. Two steps, install Operator, let operator deploy HA setup 🤣
Man, I got all excited when you said stacks, then poof, FU it's VMs. :) Really nice breakdown. I look forward to watching your brain melt when you do this under kube.
@@ClayBellBrews already did it and switched back 😅
@@TechnoTim haha; you still need to add PgBouncer and Pgpool as well. I think Percona did a video presentation on the kube stuff. I've not watched it yet, but it should be current. Great presentation btw, pretty complete but way faster than most people get it done.
It is better to have separate LBs for the primary and secondary nodes. Usually applications have separate connection pools for the master and replicas, so read-only transactions go to the replicas while read/write goes to the master. Great guide nevertheless!
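A hedged HAProxy sketch of that split (hypothetical IPs/ports), leaning on Patroni's REST API, where /primary returns 200 only on the leader and /replica only on healthy replicas:

```
listen postgres_write
    bind *:5000
    mode tcp
    option httpchk GET /primary
    http-check expect status 200
    default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
    server pg1 10.0.0.21:5432 check port 8008
    server pg2 10.0.0.22:5432 check port 8008
    server pg3 10.0.0.23:5432 check port 8008

listen postgres_read
    bind *:5001
    mode tcp
    balance roundrobin
    option httpchk GET /replica
    http-check expect status 200
    server pg1 10.0.0.21:5432 check port 8008
    server pg2 10.0.0.22:5432 check port 8008
    server pg3 10.0.0.23:5432 check port 8008
```

Apps then point their read/write pool at :5000 and their read-only pool at :5001.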
Awesome! Please make content on MySQL InnoDB cluster too
Awesome Tutorial!
Fantastic job, Tim. Earned you a sub and 👍
Awesome, thank you!
Great video!
Postgres 17 supports node-based replication. A multi-master approach would be more robust and performant.
Thank you! I'll check it out in the future! I am a fan of this because complex reads can be offloaded to replicas. I think both have their pros and cons but will look into this config too!
A Kubernetes "the hard way" would be nice
@40:10 - could you put the root-ca cert on the proxy VM to verify the PostgreSQL certificates?
Yes, you absolutely could!
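For the curious, one hedged way that could look (hypothetical paths/IPs), assuming the Patroni REST API is served over HTTPS so even the health check verifies the chain against the root CA:

```
listen postgres_primary
    bind *:5000
    mode tcp
    option httpchk GET /primary
    http-check expect status 200
    server pg1 10.0.0.21:5432 check port 8008 check-ssl verify required ca-file /etc/haproxy/root-ca.pem
```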
My eternal dilemma leveled up to another level: Docker container, or hosted on the OS as shown by you? I would also like to know how you set up backups for this kind of environment from the video. Thank you very much, Tim!
1. How does it handle transactions?
2. How do I remove or add a node in the cluster?
3. How does it handle WAL archiving? Is there any way I can restore to a point in time?
i wanted to set up a "simple" postgres replication for quite some time... i cannot express how overkill your solution feels :D
that was awesome!
Ah yes "the hard way" of doing things is still the most fun sometimes. Getting your hands dirty with config files while any change gets applied immediately to production is wild xD but I have to do it in some old legacy systems we have at work so it's always a game of "did I ruin everything just now or did I fix it?" :D
@16:00 - why did you not use the node fqdn in the cert?
Great question! In general you probably should; I just didn't set up DNS for this cluster, which, looking back, I should have. That would also better illustrate why having a SAN on the cert helps!
@5:50 - can all these be setup on only 3 VM?
can't the proxy & keepalive be on the same SQL VM?
30:21 why does Patroni need to have etcd's private key?
Etcd needs HA for its own three nodes to have quorum. So running etcd on the same nodes as postgresql would not be advisable.
This! Also, having more nodes is good for 'production' sites, but you'll always need an odd number of nodes. I've seen some sites do 5 or 7 nodes just because of the spread across availability zones.
For sure, I talked about quorum quite a bit and there is a minimum of 3 nodes. When you add more nodes, you also install and configure etcd, which means etcd will scale with Postgres. Otherwise, you can host etcd yourself. It's all explained in here.
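For the etcd side of growing or shrinking the cluster, the usual etcdctl flow looks roughly like this (hypothetical names/IPs and an example member ID; the new node's own config must list the existing peers before it starts):

```sh
# Add a new member, run against the existing cluster first:
etcdctl --endpoints=https://10.0.0.11:2379 member add etcd-4 \
  --peer-urls=https://10.0.0.14:2380

# Remove a member by its ID (list the IDs first):
etcdctl --endpoints=https://10.0.0.11:2379 member list
etcdctl --endpoints=https://10.0.0.11:2379 member remove 8e9e05c52164694d
```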
Thanks for the useful tutorial! btw, I just noticed a typo "HA Porxy" in the description
Thank you! Good catch, I just fixed that!
Hey Tim, while watching your video a lightbulb sparked and I wondered the following:
Since you're configuring etcd in HA so each host can talk to the others, how can this be used to configure a K3S cluster without using Ansible?
Any way we can get an updated video of K3S since it's been a few years?
@@abetechtips hi! I am not aware of a way to connect to an external etcd with k3s however it does support external mysql. I have videos on both without Ansible!
@ right, like you mentioned in your ansible video, who wants another VM to maintain 🥲
Hello guys,
Trying to follow here :)
I always have a hard time understanding when it comes to certificates... Here the etcd nodes get certificates signed by a local CA, and this local CA will have to be in etcd's trusted certs? And for the Postgres server certificate, we create only one cert, but self-signed (without any CA involved)?
Thanks for the free sharing of knowledge, as usual!
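For what it's worth, the difference in plain openssl terms (a generic sketch with hypothetical file and host names, not necessarily Tim's exact commands):

```sh
# Self-signed: one command, no CA involved; clients must trust this exact cert.
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -keyout server.key -out server.crt -subj "/CN=postgres.internal.lan"

# CA-signed: generate a CSR, then sign it with the local CA; clients only
# need the CA certificate in their trust store.
openssl req -newkey rsa:4096 -nodes -keyout etcd-1.key \
  -out etcd-1.csr -subj "/CN=etcd-1.internal.lan"
openssl x509 -req -in etcd-1.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 365 -out etcd-1.crt
```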
LOL @ how the title and dialog both have "Postgres SQL" (pronounced Post Gres See Quell), when it actually has one less 'S' than that, PostgreSQL, pronounced Post-GreS-Queue-Ell, or simply PostgreS (with the 'QL' silent).
It's a silly thing we've been (not really, but sometimes) arguing about in nerd-fight fashion for the last 35 years, it's even in the PostgreSQL FAQ.
Awesome video Tim! I might have missed it in this video or a previous one, but what are the specs of those existing Proxmox nodes you used?
Hey! They are Intel NUCs 11th gen i7 with 64GB of RAM.
@TechnoTim great, found the details of those on your recommended hardware page, thanks!
What would be the added value of having an HA Postgres cluster compared to an HA VM on which Postgres is installed?
That is why I prefer the Pacemaker way; I don't need HAProxy for managing the VIP address. Of course I think that Patroni is still a great tool, but Pacemaker still wins for HA in PostgreSQL.
Great video! A lot of moving parts (so a bit intimidating) :o If I already have HAProxy running on a pfSense box, it will double as a LB. Can you use the health checks there to achieve the same/similar thing (using the Patroni API to get the master)? Pros/cons?
54 mins! 😂 Holy baby Jesus!
Do I get the correct impression that you basically can use this HA cluster config with way more stuff to cluster than "just" PostgreSQL? Would be interesting to automate that also for other stuff. 🤔
Great video! Can you do it for mongodb ?
Wouldn't it be possible and easier to do it with Kubernetes, instead of 6 host machines?
Was wondering, can you do it with MariaDB? WordPress site?
Galera is what you are looking for with MariaDB
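For reference, a minimal Galera sketch (hypothetical IPs; something like /etc/mysql/conf.d/galera.cnf on each node):

```ini
[galera]
wsrep_on                 = ON
wsrep_provider           = /usr/lib/galera/libgalera_smm.so
wsrep_cluster_name       = "wordpress-cluster"
wsrep_cluster_address    = "gcomm://10.0.0.31,10.0.0.32,10.0.0.33"
binlog_format            = row
default_storage_engine   = InnoDB
innodb_autoinc_lock_mode = 2
```

Bootstrap the first node with galera_new_cluster; the rest join via the gcomm:// list.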
Liked before watching!
Wait, in which machine should I generate the certificates?
THIS MACHINE. 😂. I had to make it very clear.
Do you know how to do this using Microsoft SQL Server for Linux/Docker?
What about Read replicas?
Great video. Yeah, don't bring the ludicrous kubectl pronunciations (even if a committer says it is cuttle, as in fish?) to the rest of the sane world ;-) ctl = ConTroL: systemctl (ConTroL!), timedatectl (ConTroL!), journalctl (ConTroL!), etcdctl (ConTroL!). The only confusion is in Kubernetes; the rest of the world is fine with "control" ;-)
I am a fan of saying "control" for ctl :). I never know how things cross over from k8s pronunciations :)
What about CloudNativePG?
open relational😂
Just one thing, if possible please use dark mode for explaining diagrams.
PS. Amazing video, Thanks for sharing.
Citus Data in Docker is the way to go...
I'd rather take a small 3-node Kubernetes cluster and put cnpg in there.
I've since moved Postgres out of my k8s cluster due to performance and continued issues with k8s storage (pods sometimes do not reconnect to their storage, and it takes manual intervention). Neither is an inherent issue with Postgres or CNPG; however, it still resulted in more issues and downtime than hosting in VMs/LXC (0 downtime so far in over a month). I agree, if I do go back to k8s, a dedicated cluster with a dedicated node pool is the way to go to avoid the performance issues you get with shared environments, but that still doesn't address the bigger issue I had, which was PVCs hanging or not reconnecting to pods.
I am currently trying to migrate our production environment to Kubernetes. Still testing at the moment. And I would also like to have PostgreSQL in Kubernetes via an operator like CNPG, StackGres, or Zalando. Would you advise against something like that?
@@ralumbur See my comment above for some of my issues and how I would do it different in the future.
@@TechnoTim Were you experiencing those issues with k3s/rke2 or on native k8s? I have run into PVC mounting issues on rke2 clusters on Harvester HCI before which is why I moved away at some point from Harvester and rke2/k3s.
@@TechnoTim Were you using local storage or network storage? I am pretty sure CNPG recommends local storage given the HA is covered by replicas.