=====Failover=====
I was going to kill myself a couple of times, seriously. I was banging my head against the wall for at least a couple of weeks because I couldn't get the failover to work.
With PgPool, you can use either PostgreSQL streaming replication to move the data for you, OR PgPool's own replication. By default, PgPool assumes it is the FIRST thing you install; in our case, HOWEVER, it was installed second, on top of existing streaming replication, so a little modification is needed.
Please ENSURE the following parameters are set on the nodes:
<code>
master_slave_mode = on
master_slave_sub_mode = 'stream'
</code>

And the following ones, turned off:

<code>
-bash-4.2$ cat pgpool.conf | grep replication
replication_mode = off
                                   # Activate replication mode
                                   # when in replication mode
                                   # replication mode, specify table name to
</code>

The two settings are mutually exclusive: only one of them can be active at a time.
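Note that both replication_mode and master_slave_mode are marked "change requires restart" in pgpool.conf, so a plain reload is not enough. A minimal sketch, assuming a systemd-managed Pgpool-II (the unit name varies between packages):

<code>
# restart Pgpool-II so the mode change takes effect
systemctl restart pgpool-II

# verify the effective settings once it is back up
psql -h 192.168.0.220 -p 9999 -U postgres -c "show pool_status" | grep -E 'replication_mode|master_slave'
</code>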
This indicates that you ALREADY have streaming replication and that you take care of it yourself.
====Current State====
After that, let's check the current state of the cluster:
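The check goes through PgPool's listener on port 9999 using the show pool_nodes command (the same one used after the failover below); at this point the role column should report postgresqlmaster as primary and postgresqlslaveone as standby:

<code>
psql -h 192.168.0.220 -p 9999 -U postgres postgres -c "show pool_nodes"
</code>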
That clearly states that postgresqlmaster is the master and postgresqlslaveone is the slave :) I know, stupid naming, but bear with me :)
====Database Failover====
So what happens after I shut down the first database:
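For reference, a minimal way to simulate that failure on the master; the data directory path here is an assumption, adjust it to your installation:

<code>
# on postgresqlmaster: kill the database abruptly to simulate a crash
pg_ctl -D /var/lib/pgsql/data stop -m immediate
</code>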

====PGPool Failover====
Now, on the slave (the new master) you won't see anything in PgPool until you shut down the old master's PgPool too. That is usually realistic: when a master fails, the entire server is dead :)
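A sketch of that last step, assuming Pgpool-II runs under systemd: stopping PgPool on the dead master is what lets the watchdog on the standby take over the delegate IP that the pcp commands below target:

<code>
# on postgresqlmaster: stop PgPool so the standby's watchdog takes over
systemctl stop pgpool-II
</code>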
====After failover====
After all this is done, we can check the new status of the cluster :)
<code>
[root@postgresqlslaveone tmp]# pcp_watchdog_info -p 9898 -h 192.168.0.220 -U postgres
Password:
2 YES postgresqlslaveone:9999 Linux postgresqlslaveone postgresqlslaveone

postgresqlslaveone:9999 Linux postgresqlslaveone postgresqlslaveone 9999 9000 4 MASTER
postgresqlmaster:9999 Linux postgresqlmaster postgresqlmaster 9999 9000 7 STANDBY
[root@postgresqlslaveone tmp]# psql -h 192.168.0.220 -p 9999 -U postgres postgres -c "show pool_nodes"
Password for user postgres:
 node_id |      hostname      | port | status | lb_weight |  role   | select_cnt | load_balance_node | replication_delay | replication_state | replication_sync_state | last_status_change
---------+--------------------+------+--------+-----------+---------+------------+-------------------+-------------------+-------------------+------------------------+---------------------
 0       | postgresqlmaster   | 5432 | down   | ...       | standby | ...        | ...               | ...               | ...               | ...                    | ...
 1       | postgresqlslaveone | 5432 | up     | ...       | primary | ...        | ...               | ...               | ...               | ...                    | ...
(2 rows)

[root@postgresqlslaveone tmp]#
</code>
<code>
logger -i -p local1.info follow_master.sh:
exit 0
</code>
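The tail above just logs the outcome and exits 0. For the script to run at all, it must be registered in pgpool.conf; a typical wiring looks like the following (the path is an assumption, and in Pgpool-II 4.2+ the parameter was renamed to follow_primary_command). The %-placeholders are PgPool's own: %d failed node id, %h failed host, %m new master node id, %H new master host, and so on:

<code>
follow_master_command = '/etc/pgpool-II/follow_master.sh %d %h %p %D %m %H %M %P %r %R'
</code>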
=====Implementation with Kubernetes=====
Pgpool can be implemented in Kubernetes in one of two ways:
1) Via environment variables
2) Via ConfigMaps

In this case, we will use a ConfigMap:
<code>
apiVersion: v1
kind: ConfigMap
metadata:
  name: pgpool-config
  namespace: db-test
  labels:
    app: pgpool-config
data:
  pgpool.conf: |-
    listen_addresses = '*'
    port = 9999
    socket_dir = '/var/run/pgpool'
    pcp_listen_addresses = '*'
    pcp_port = 9898
    pcp_socket_dir = '/var/run/pgpool'
    backend_hostname0 = 'experience-db-cluster-alinma-rw'
    backend_port0 = 5432
    backend_weight0 = 1
    backend_flag0 = 'ALWAYS_PRIMARY|DISALLOW_TO_FAILOVER'
    backend_auth_method0 = '<auth method>'
    backend_password0 = '<password>'
    backend_hostname1 = 'experience-db-cluster-alinma-ro'
    backend_port1 = 5432
    backend_weight1 = 1
    backend_flag1 = 'DISALLOW_TO_FAILOVER'
    backend_password1 = '<password>'
    backend_auth_method1 = '<auth method>'
    backend_hostname2 = 'experience-db-cluster-alinma-ro'
    backend_port2 = 5432
    backend_weight2 = 2
    backend_flag2 = 'DISALLOW_TO_FAILOVER'
    backend_password2 = '<password>'
    backend_auth_method2 = '<auth method>'
    sr_check_user = '<user>'
    sr_check_password = '<password>'
    sr_check_period = 10
    enable_pool_hba = on
    master_slave_mode = on
    num_init_children = 32
    max_pool = 4
    child_life_time = 300
    child_max_connections = 0
    connection_life_time = 0
    client_idle_limit = 0
    connection_cache = on
    load_balance_mode = on
    PGPOOL_PCP_USER = '<pcp user>'
    PGPOOL_PCP_PASSWORD = '<pcp password>'
  pcp.conf: |-
    experience_db:<md5 hash of the PCP password>
  pool_passwd: |-
    experience_db:<password hash>
  pool_hba.conf: |-
    local   all   all                        trust
    host    all   all   127.0.0.1/32         trust
    host    all   all   ::1/128              trust
    host    all   all   0.0.0.0/0            scram-sha-256
</code>
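The password hashes in pcp.conf and pool_passwd above are elided. Assuming md5 authentication, they can be generated with PgPool's pg_md5 utility (for scram-sha-256, pg_enc is the counterpart):

<code>
# append an entry to pool_passwd (format: username:md5 hash of password+username)
pg_md5 --md5auth --username=experience_db <password>

# print a plain md5 hash to paste into pcp.conf (format: username:md5hash)
pg_md5 <pcp password>
</code>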

After we create that ConfigMap with:

<code>
kubectl apply -f configmap.yaml
</code>
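A quick check that the ConfigMap landed where expected; note that the Deployment and Service below must be applied to the same db-test namespace, otherwise the pod cannot mount pgpool-config:

<code>
kubectl -n db-test get configmap pgpool-config
</code>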

We can now create the Deployment and the Service:

<code>
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pgpool
spec:
  replicas: 3
  selector:
    matchLabels:
      app: pgpool
  template:
    metadata:
      labels:
        app: pgpool
    spec:
      containers:
      - name: pgpool
        image: pgpool/pgpool
        env:
        - name: POSTGRES_USERNAME
          value: "<username>"
        - name: POSTGRES_PASSWORD
          value: "<password>"
        - name: PGPOOL_PASSWORD_ENCRYPTION_METHOD
          value: "scram-sha-256"
        - name: PGPOOL_ENABLE_POOL_PASSWD
          value: "true"
        - name: PGPOOL_SKIP_PASSWORD_ENCRYPTION
          value: "false"
        # The following settings are not required when not using the Pgpool-II PCP command.
        # To enable the following settings, you must define a secret that stores the PCP user's
        # username and password.
        #- name: PGPOOL_PCP_USER
        #  valueFrom:
        #    secretKeyRef:
        #      name: pgpool-pcp-secret
        #      key: username
        #- name: PGPOOL_PCP_PASSWORD
        #  valueFrom:
        #    secretKeyRef:
        #      name: pgpool-pcp-secret
        #      key: password
        volumeMounts:
        - name: pgpool-config
          mountPath: /config
        #- name: pgpool-tls
        #  mountPath: /config/tls
      volumes:
      - name: pgpool-config
        configMap:
          name: pgpool-config
      # Configure your own TLS certificate.
      # If not set, Pgpool-II will automatically generate the TLS certificate if ssl = on.
      #- name: pgpool-tls
      #  secret:
      #    secretName: pgpool-tls
---
apiVersion: v1
kind: Service
metadata:
  name: pgpool
spec:
  selector:
    app: pgpool
  ports:
  - name: pgpool-port
    protocol: TCP
    port: 9999
    targetPort: 9999
</code>
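Once the pods are up, the whole chain can be sanity-checked by connecting through the pgpool Service on port 9999. A sketch, assuming everything was applied in the db-test namespace and that experience_db (the user from the ConfigMap) can log in; any client image that ships psql will do:

<code>
kubectl -n db-test run psql-client --rm -it --image=postgres:16 -- \
  psql -h pgpool -p 9999 -U experience_db -c "show pool_nodes"
</code>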