Patroni PostgreSQL - Cluster Setup

  • Published: Sep 6, 2024

Comments • 32

  • @dem_linux9021
    @dem_linux9021  10 months ago +3

    Added a small change to the HAProxy configuration:

    frontend patroni-prod
        mode tcp
        maxconn 5000
        bind *:5432
        default_backend patroni_servers

    backend patroni_servers
        mode tcp
        option httpchk
        http-check expect status 200
        default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
        server node1 10.10.0.181:5432 maxconn 100 check port 8008
        server node2 10.10.0.182:5432 maxconn 100 check port 8008

    This lets you talk to the databases at HA-PROXY-IP:5432, meaning applications use the load-balancer IP to reach the databases. If you'd rather keep the port from the video, just use:
    psql -U postgres -h load-balancer-ip -p 5000
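    The `check port 8008` lines above work because Patroni exposes a REST API on every node. A hedged sketch of probing it (node IPs taken from the config above; the /primary endpoint is standard Patroni, but verify it against your version):

```shell
# Patroni's REST API answers 200 on /primary only on the current leader,
# which is what HAProxy's httpchk keys off to route writes correctly.
curl -s -o /dev/null -w "%{http_code}\n" http://10.10.0.181:8008/primary
curl -s -o /dev/null -w "%{http_code}\n" http://10.10.0.182:8008/primary
# the node returning 200 is the leader; replicas answer 503
```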

  • @user-pr9it1oj3f
    @user-pr9it1oj3f 5 months ago

    Beautiful explanation,
    Thank you sir.

  • @DejinKumar
    @DejinKumar 4 months ago

    Thanks for this helpful information.

  • @innlinellc7672
    @innlinellc7672 19 days ago

    Hi dear, do I have to create the users admin, postgres and replicator manually, or should Patroni create them automatically when I start Patroni on Ubuntu 22.04?
    Do I also have to start postgresql.service and postgresql@12-main.service?

    • @dem_linux9021
      @dem_linux9021  17 days ago

      Hey, no need to create the users, and any recent version of Patroni will start Postgres as well, so you only need the patroni service. You can also check the PID to see whether Postgres starts with the patroni service; if not, start Postgres yourself.
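      A hedged sketch of that PID check (the service names come from the question above; the patroni.yml path is an assumption):

```shell
# Postgres should be running as a child of the patroni service:
systemctl status patroni --no-pager
pgrep -af postgres                           # started by patroni, not by the stock units
patronictl -c /etc/patroni/patroni.yml list  # cluster view
# keep the distro units disabled so they don't fight patroni for the data dir:
systemctl disable --now postgresql.service postgresql@12-main.service
```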

  • @f.5528
    @f.5528 4 months ago

    very interesting video. thank you

  • @funkiam9214
    @funkiam9214 6 months ago

    to make things even better you can add pgbouncer to optimize connections
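    A minimal PgBouncer sketch in front of the load balancer, assuming the HAProxy VIP from this thread (10.10.0.150) and a hypothetical database name "app":

```shell
# write a minimal pgbouncer.ini pointing at the HAProxy/VIP address
cat > /etc/pgbouncer/pgbouncer.ini <<'EOF'
[databases]
app = host=10.10.0.150 port=5000 dbname=app

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 20
EOF
systemctl restart pgbouncer
```

    Applications then connect to port 6432 instead of hitting Postgres or HAProxy directly.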

  • @yashkotari
    @yashkotari 10 months ago +1

    How do we ensure our connections through HAProxy are not failing in case the HAProxy (Node4) goes down? Is there a way we can set up HA for HAProxy load balancing?

    • @dem_linux9021
      @dem_linux9021  10 months ago

      Hi, yes, you could use Keepalived to get HA on the HAProxy.

    • @yashkotari
      @yashkotari 10 months ago

      @@dem_linux9021 I'm glad you responded almost immediately. Can't thank you enough.
      Any plans on making a video on it?
      I'm not a Linux expert, but an HA/DR PostgreSQL setup is the need of the hour in my environment.

    • @dem_linux9021
      @dem_linux9021  10 months ago

      @@yashkotari hey, I can try to make a video with Keepalived. It's not that hard: all you need is a conf on both HAProxy nodes sharing a virtual IP.
      Example conf file on HAProxy 1:

      vrrp_script chk_haproxy {
          script "/usr/bin/killall -0 haproxy"
          interval 2
          weight 2
      }
      vrrp_instance VI_1 {
          interface eth0
          virtual_router_id 51
          state MASTER
          priority 101
          virtual_ipaddress {
              10.10.0.150
          }
          track_script {
              chk_haproxy
          }
      }

      And on HAProxy 2 (note: Keepalived's states are MASTER/BACKUP, not SLAVE, and the backup should have a lower priority):

      vrrp_script chk_haproxy {
          script "/usr/bin/killall -0 haproxy"
          interval 2
          weight 2
      }
      vrrp_instance VI_1 {
          interface eth0
          virtual_router_id 51
          state BACKUP
          priority 100
          virtual_ipaddress {
              10.10.0.150
          }
          track_script {
              chk_haproxy
          }
      }

      Now you would use 10.10.0.150 as your entry point to HAProxy, so maybe add a DNS record ha-proxy.local > 10.10.0.150. This IP will then move between HAProxy 1 and HAProxy 2 depending on state. I will try to make a quick video about it this week :)
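      A quick way to verify the VIP failover after deploying those confs (interface name taken from the conf above):

```shell
# the VIP should be visible only on the current MASTER
ip addr show eth0 | grep 10.10.0.150
systemctl status keepalived --no-pager
# simulate failover: stop haproxy on the master; chk_haproxy fails, the
# effective priority drops, and the BACKUP node takes over the VIP
systemctl stop haproxy
```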

    • @dem_linux9021
      @dem_linux9021  10 months ago

      Added how to install it in my GitHub README page:
      github.com/dem-linux/patroni-postgres/blob/main/README.md
      @@yashkotari

    • @alexander6925
      @alexander6925 2 months ago

      @@dem_linux9021 Hi brother, can you tell me if HAProxy is required for failover? Or can I skip installing HAProxy, and if the master dies will the replica still take its place?

  • @barkleybarkleyy4418
    @barkleybarkleyy4418 11 months ago +1

    Hi, that guy. According to the FHS, the /mnt directory is reserved for temporarily mounted file systems, so you shouldn't be configuring databases there; /srv, /var, or a dedicated /mypatronidb are all better choices. As for the insecure md5 algorithm, Postgres has ditched it and defaults to scram-sha-256 starting from v14, which you use in this video. You can just change that in your configs and everything will work.
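    A hedged sketch of that scram-sha-256 switch (the patroni.yml path, the CIDR, and the `patronictl edit-config -s` usage are assumptions to verify against your Patroni version):

```shell
# make new password hashes use SCRAM (dynamic config, applied cluster-wide)
patronictl -c /etc/patroni/patroni.yml edit-config \
  -s "postgresql.parameters.password_encryption=scram-sha-256"
# then, in patroni.yml on every node, replace the md5 pg_hba lines with e.g.:
#   pg_hba:
#     - host replication replicator 10.10.0.0/24 scram-sha-256
#     - host all all 0.0.0.0/0 scram-sha-256
# existing passwords must be re-set so Postgres stores SCRAM verifiers
```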

  • @meetvora4914
    @meetvora4914 1 year ago +1

    Hi, can you post a video on how to configure Patroni with the Citus database? I am having a hard time setting it up with Citus.
    Thank you.

    • @dem_linux9021
      @dem_linux9021  1 year ago +1

      Hi, I will take a look at it and try to record a video.

  • @user-re9uy6ei3d
    @user-re9uy6ei3d 1 year ago +1

    My leader node and replica node are not syncing. Can you please help?

    • @dem_linux9021
      @dem_linux9021  1 year ago

      Check etcd and compare the configuration
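      A few hedged etcd checks for a non-syncing pair (etcd v3 API and Patroni's default /service namespace assumed):

```shell
etcdctl member list                         # both Patroni nodes should see the same members
etcdctl endpoint health                     # a sick DCS blocks leader election and sync
etcdctl get --prefix /service/ --keys-only  # the keys Patroni keeps its state under
```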

  • @emanfeah9960
    @emanfeah9960 1 year ago

    Thank you very much, but I have a question: can I use one server for both etcd and HAProxy? I see you use 4 servers.

    • @dem_linux9021
      @dem_linux9021  1 year ago

      Hi, you could run both etcd and the proxy on one server, but if that server dies you lose the whole point of high availability and etcd at the same time, so it's nothing I would recommend.

  • @wahyucandratama2948
    @wahyucandratama2948 1 year ago

    Is the patroni.yml on node1 the same as on node2?

    • @dem_linux9021
      @dem_linux9021  1 year ago

      Hi, almost; you just need to change the listen IP to the IP of the server.
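      In other words, only the node-specific keys change. A sketch (names and IPs as used in this thread; everything else stays identical on both nodes):

```shell
# keys that must differ between node1 and node2 in patroni.yml:
#   name:                        node1            -> node2
#   restapi.connect_address:     10.10.0.181:8008 -> 10.10.0.182:8008
#   postgresql.connect_address:  10.10.0.181:5432 -> 10.10.0.182:5432
#   (plus any listen addresses set to the node's own IP)
grep -nE "name:|listen|connect_address" /etc/patroni/patroni.yml
```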

  • @wahyucandratama2948
    @wahyucandratama2948 1 year ago

    Your tutorial is helpful for me, but can you show me how to access the cluster (node1 & node2) via node-etcd? So my apps only need the node-etcd IP to connect to the clustered database.

    • @dem_linux9021
      @dem_linux9021  1 year ago

      Hi, you can just use the IP of the load balancer to access the database; that's why I use HAProxy, as it automatically handles failover and points to the leader/master node.

    • @wahyucandratama2948
      @wahyucandratama2948 1 year ago

      @@dem_linux9021 cool, my project was done, thanks for making this video 👌👌

  • @panthajan
    @panthajan 8 months ago

    authentication:
        replication:
            username: replicator
            password: password
        superuser:
            username: postgres
            password: password
    Here, should I create this user & password manually after installing Postgres on all nodes? Is there any other configuration needed on the Postgres nodes?

    • @dem_linux9021
      @dem_linux9021  8 months ago

      Hi, you only adjust that in the Patroni conf; these are examples, but you can change them as you wish.
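      A quick way to confirm that Patroni created those users itself during bootstrap (node IP taken from this thread):

```shell
# Patroni creates the superuser and replication user on first initdb,
# so they should already exist:
psql -U postgres -h 10.10.0.181 -c "\du"   # should list postgres and replicator
```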

  • @khansaif7684
    @khansaif7684 1 month ago

    My Patroni cluster has no leader. I'm setting up a Patroni cluster and facing an issue where both nodes remain in the 'stopped' state, and the logs indicate that the system is waiting for a leader to bootstrap. Here is the output when I run patronictl -c /etc/postgres0.yml list:
    + Cluster: postgres (7366xxxxxxxxxxxxxxx) -------+---------+----+-----------+
    | Member      | Host           | Role    | State   | TL | Lag in MB |
    +-------------+----------------+---------+---------+----+-----------+
    | postgresql0 | xxx.xxx.xx.57  | Replica | stopped |    | unknown   |
    | postgresql1 | xxx.xxx.xx.129 | Replica | stopped |    | unknown   |
    +-------------+----------------+---------+---------+----+-----------+
    Kindly help me solve this issue.

    • @dem_linux9021
      @dem_linux9021  1 month ago

      This means that Postgres has not started; can you check the syslog?
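      Typical first checks for a cluster stuck without a leader (unit name and log paths are Ubuntu-style assumptions):

```shell
journalctl -u patroni -n 50 --no-pager           # why Patroni cannot start postgres
tail -n 50 /var/log/postgresql/postgresql-*.log  # postgres' own startup errors
etcdctl endpoint health                          # a broken DCS also blocks bootstrap
```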

  • @code4func-kythuatlaptrinh859
    @code4func-kythuatlaptrinh859 1 month ago

    I did the same as you but always meet this error >> CRITICAL: system ID mismatch, node patrono-node2 belongs to a different cluster: 7396713246569953512 != 7396707261106546582

    • @dem_linux9021
      @dem_linux9021  23 days ago

      Check etcd; it seems like you have more than one etcd and they are not in the same cluster. Remove one etcd, then delete the cluster by deleting Patroni's data dir and re-init the cluster.
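      A hedged, destructive sketch of that reset, to run only on the mismatched node (the data directory path is an assumption, and this wipes that node's data):

```shell
systemctl stop patroni
rm -rf /var/lib/patroni/data   # the wrong-cluster data dir on node2
# make sure patroni.yml on BOTH nodes points at the same etcd cluster, then:
systemctl start patroni        # the node re-initializes as a replica from the leader
```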