postgresql_replication_ha [2020/01/28 12:30] – andonovj | postgresql_replication_ha [2024/11/09 19:13] (current) – andonovj | ||
Line 44: | Line 44: | ||
* libmemcached | * libmemcached | ||
- | <sxh bash> | + | <Code:none| Download required Libraries> |
Using username " | Using username " | ||
Last login: Thu Jan 23 10:06:12 2020 | Last login: Thu Jan 23 10:06:12 2020 | ||
Line 72: | Line 72: | ||
[root@postgresqlslaveone ~]# | [root@postgresqlslaveone ~]# | ||
- | </sxh> | + | </Code> |
After that we can install them | After that we can install them | ||
=====Install packages===== | =====Install packages===== | ||
- | <sxh bash> | + | <Code:none | Install> |
[root@postgresqlslaveone ~]# yum install libevent | [root@postgresqlslaveone ~]# yum install libevent | ||
Loaded plugins: fastestmirror, | Loaded plugins: fastestmirror, | ||
Line 138: | Line 138: | ||
postgres ALL=NOPASSWD: | postgres ALL=NOPASSWD: | ||
[root@postgresqlslaveone ~]# | [root@postgresqlslaveone ~]# | ||
- | </sxh> | + | </Code> |
After that we could finally install the PGPool as you can see above. | After that we could finally install the PGPool as you can see above. | ||
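+ | For reference, the Pgpool-II installation itself (shown truncated above) typically comes from the Pgpool-II community yum repository. This is only a sketch - the repository URL and package names depend on the Pgpool-II and PostgreSQL versions, so treat them as assumptions rather than the exact commands used here: | ||
+ | <Code:none | Install Pgpool-II (sketch)> | ||
+ | # Register the Pgpool-II community repository (version/URL are assumptions) | ||
+ | yum install -y http://www.pgpool.net/yum/rpms/4.1/redhat/rhel-7-x86_64/pgpool-II-release-4.1-1.noarch.rpm | ||
+ | # Pgpool-II built against PostgreSQL 10, plus the server-side extensions | ||
+ | yum install -y pgpool-II-pg10 pgpool-II-pg10-extensions | ||
+ | </Code> | ||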
Line 472: | Line 472: | ||
</ | </ | ||
+ | |||
+ | =====Failover===== | ||
+ | I was going to kill myself a couple of times, seriously. I was banging my head against the wall for at least a couple of weeks because I couldn't get the failover to work. | ||
+ | |||
+ | With PgPool, you can either let PostgreSQL streaming replication move the data for you, OR use PgPool's own replication. By default, PgPool assumes it is the FIRST thing you install and that it handles the replication itself, HOWEVER in our case it was installed second, on top of an existing streaming replication, so a little modification is needed. | ||
+ | |||
+ | Please ENSURE the following parameters are set on the nodes: | ||
+ | |||
+ | <Code:none | pgpool.conf - streaming replication mode> | ||
+ | master_slave_mode = on | ||
+ | master_slave_sub_mode = 'stream' | ||
+ | </Code> | ||
+ | |||
+ | And the following one turned off: | ||
+ | |||
+ | <Code:none | pgpool.conf - replication mode off> | ||
+ | -bash-4.2$ cat pgpool.conf | grep replication | ||
+ | replication_mode = off | ||
+ | # Activate replication mode | ||
+ | # when in replication mode | ||
+ | # replication mode, specify table name to | ||
+ | |||
+ | </Code> | ||
+ | The two settings are mutually exclusive; only one can be active at a time. | ||
+ | |||
+ | This tells Pgpool-II that you ALREADY have streaming replication in place and that you take care of it yourself. | ||
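+ | A quick way to double-check all three settings on each node - the pgpool.conf location is an assumption (on this setup it appears to live under /etc/pgpool-II/): | ||
+ | <Code:none | Verify the mode settings (sketch)> | ||
+ | # Only master_slave_mode should be on; replication_mode must stay off | ||
+ | grep -E '^(master_slave_mode|master_slave_sub_mode|replication_mode)' /etc/pgpool-II/pgpool.conf | ||
+ | # Expected: | ||
+ | #   master_slave_mode = on | ||
+ | #   master_slave_sub_mode = 'stream' | ||
+ | #   replication_mode = off | ||
+ | </Code> | ||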
+ | |||
+ | ====Current State==== | ||
+ | After that, let's see our current state of the cluster: | ||
+ | |||
+ | <Code:none | Current pool_nodes status> | ||
+ | [root@postgresqlslaveone tmp]# psql -h 192.168.0.220 -p 9999 -U postgres postgres -c "show pool_nodes" | ||
+ | Password for user postgres: | ||
+ | | ||
+ | ---------+--------------------+------+--------+-----------+---------+------------+-------------------+-------------------+-------------------+------------------------+--------------------- | ||
+ | | ||
+ | | ||
+ | (2 rows) | ||
+ | |||
+ | [root@postgresqlslaveone tmp] | ||
+ | </Code> | ||
+ | |||
+ | That clearly states that postgresqlmaster is the master and postgresqlslaveone is the slave :) I know, stupid naming, but bear with me :) | ||
+ | |||
+ | ====Database Failover==== | ||
+ | So what happens after I shut down the first database: | ||
+ | |||
+ | <Code:none | Stop the primary database> | ||
+ | [root@postgresqlmaster pgpool-II]# su - postgres | ||
+ | Last login: Fri Feb 7 10:01:53 EST 2020 on pts/3 | ||
+ | -bash-4.2$ / | ||
+ | waiting for server to shut down..... done | ||
+ | server stopped | ||
+ | -bash-4.2$ | ||
+ | </Code> | ||
+ | |||
+ | Well... what can happen? The database will go down, of course :) | ||
+ | |||
+ | <Code:none | PostgreSQL log on the old master> | ||
+ | 2020-02-07 10: | ||
+ | 2020-02-07 10: | ||
+ | 2020-02-07 10: | ||
+ | 2020-02-07 10: | ||
+ | 2020-02-07 10: | ||
+ | </Code> | ||
+ | |||
+ | |||
+ | And even more shockingly, PGPool will FINALLY recognize it: | ||
+ | <Code:none | Pgpool-II log - failure detected> | ||
+ | 10:15:09 postgresqlmaster pgpool[16840]: | ||
+ | Feb 7 10:15:09 postgresqlmaster pgpool[16840]: | ||
+ | Feb 7 10:15:09 postgresqlmaster pgpool[16802]: | ||
+ | Feb 7 10:15:09 postgresqlmaster pgpool[16802]: | ||
+ | Feb 7 10:15:09 postgresqlmaster pgpool[16802]: | ||
+ | Feb 7 10:15:09 postgresqlmaster pgpool[16802]: | ||
+ | Feb 7 10:15:09 postgresqlmaster pgpool[16802]: | ||
+ | Feb 7 10:15:09 postgresqlmaster pgpool[16802]: | ||
+ | Feb 7 10:15:09 postgresqlmaster pgpool[16840]: | ||
+ | Feb 7 10:15:09 postgresqlmaster pgpool[16840]: | ||
+ | Feb 7 10:15:09 postgresqlmaster pgpool[16800]: | ||
+ | Feb 7 10:15:09 postgresqlmaster pgpool[16802]: | ||
+ | Feb 7 10:15:09 postgresqlmaster pgpool[16802]: | ||
+ | Feb 7 10:15:09 postgresqlmaster pgpool[16802]: | ||
+ | Feb 7 10:15:09 postgresqlmaster pgpool[16800]: | ||
+ | Feb 7 10:15:09 postgresqlmaster pgpool[16800]: | ||
+ | Feb 7 10:15:09 postgresqlmaster pgpool[16800]: | ||
+ | Feb 7 10:15:11 postgresqlmaster postgres[17112]: | ||
+ | Feb 7 10:15:11 postgresqlmaster postgres[17113]: | ||
+ | Feb 7 10:15:11 postgresqlmaster postgres[17113]: | ||
+ | Feb 7 10:15:11 postgresqlmaster postgres[17113]: | ||
+ | Feb 7 10:15:11 postgresqlmaster postgres[17113]: | ||
+ | Feb 7 10:15:11 postgresqlmaster postgres[17113]: | ||
+ | Feb 7 10:15:11 postgresqlmaster postgres[17113]: | ||
+ | Feb 7 10:15:11 postgresqlmaster postgres[17113]: | ||
+ | Feb 7 10:15:11 postgresqlmaster postgres[17113]: | ||
+ | Feb 7 10:15:11 postgresqlmaster postgres[17113]: | ||
+ | Feb 7 10:15:11 postgresqlmaster postgres[17113]: | ||
+ | Feb 7 10:15:11 postgresqlmaster postgres[17113]: | ||
+ | Feb 7 10:15:11 postgresqlmaster postgres[17113]: | ||
+ | Feb 7 10:15:11 postgresqlmaster postgres[17113]: | ||
+ | Feb 7 10:15:11 postgresqlmaster postgres[17113]: | ||
+ | Feb 7 10:15:11 postgresqlmaster postgres[17113]: | ||
+ | Feb 7 10:15:11 postgresqlmaster postgres[17113]: | ||
+ | Feb 7 10:15:11 postgresqlmaster postgres[17113]: | ||
+ | Feb 7 10:15:11 postgresqlmaster postgres[17113]: | ||
+ | Feb 7 10:15:11 postgresqlmaster postgres[17113]: | ||
+ | Feb 7 10:15:11 postgresqlmaster postgres[17113]: | ||
+ | Feb 7 10:15:11 postgresqlmaster postgres[17113]: | ||
+ | Feb 7 10:15:11 postgresqlmaster postgres[17113]: | ||
+ | Feb 7 10:15:11 postgresqlmaster postgres[17113]: | ||
+ | Feb 7 10:15:11 postgresqlmaster postgres[17113]: | ||
+ | Feb 7 10:15:11 postgresqlmaster postgres[17113]: | ||
+ | Feb 7 10:15:11 postgresqlmaster postgres[17113]: | ||
+ | Feb 7 10:15:11 postgresqlmaster postgres[17113]: | ||
+ | Feb 7 10:15:11 postgresqlmaster postgres[17113]: | ||
+ | l/10/data status | ||
+ | Feb 7 10:15:11 postgresqlmaster postgres[17113]: | ||
+ | </Code> | ||
+ | |||
+ | FINALLY, the failover script was executed, after countless blood stains on the wall. | ||
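+ | For completeness, the failover and follow-master scripts from the Appendix are wired into Pgpool-II via pgpool.conf. A minimal sketch - the script locations are assumptions, but the placeholder order mirrors the variable assignments at the top of the Appendix scripts: | ||
+ | <Code:none | pgpool.conf - failover hooks (sketch)> | ||
+ | # %d/%h/%p/%D = failed node id/host/port/datadir, %m/%H = new master id/host, | ||
+ | # %M/%P = old master/primary id, %r/%R = new master port/datadir, %N/%S = old primary host/port | ||
+ | failover_command = '/etc/pgpool-II/failover.sh %d %h %p %D %m %H %M %P %r %R %N %S' | ||
+ | follow_master_command = '/etc/pgpool-II/follow_master.sh %d %h %p %D %m %M %H %P %r %R' | ||
+ | </Code> | ||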
+ | |||
+ | |||
+ | You can also see this in the database log on the new master node: | ||
+ | |||
+ | <Code:none | PostgreSQL log on the new master - promotion> | ||
+ | 2020-01-28 14: | ||
+ | 2020-01-28 14: | ||
+ | cp: cannot stat ‘/ | ||
+ | cp: cannot stat ‘/ | ||
+ | 2020-01-28 14: | ||
+ | 2020-01-28 14: | ||
+ | cp: cannot stat ‘/ | ||
+ | 2020-01-28 14: | ||
+ | </Code> | ||
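+ | A simple sanity check (not part of the original log) is to ask PostgreSQL itself on the new master whether it has really left recovery: | ||
+ | <Code:none | Confirm promotion (sketch)> | ||
+ | -bash-4.2$ psql -c "SELECT pg_is_in_recovery();" | ||
+ |  pg_is_in_recovery | ||
+ | ------------------- | ||
+ |  f | ||
+ | (1 row) | ||
+ | </Code> | ||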
+ | |||
+ | |||
+ | |||
+ | ====PGPool Failover==== | ||
+ | Now, on the slave (the new master) you won't see anything happen in Pgpool until you shut down the pgpool on the old master as well, because usually when a master fails, the entire server is dead :) | ||
+ | |||
+ | <Code:none | Stop pgpool on the old master> | ||
+ | [root@postgresqlmaster pgpool-II]# service pgpool stop | ||
+ | Redirecting to / | ||
+ | [root@postgresqlmaster pgpool-II]# | ||
+ | </Code> | ||
+ | |||
+ | Once you do, the VIP will migrate over as well :) (the watchdog settings that drive this are sketched after the log below) | ||
+ | <Code:none | Pgpool-II log on the new master - VIP takeover> | ||
+ | Jan 28 15:33:47 postgresqlslaveone pgpool[25956]: | ||
+ | Jan 28 15:33:51 postgresqlslaveone pgpool[25956]: | ||
+ | Jan 28 15:33:52 postgresqlslaveone pgpool[25956]: | ||
+ | Jan 28 15:33:52 postgresqlslaveone pgpool[25956]: | ||
+ | Jan 28 15:33:52 postgresqlslaveone pgpool[25956]: | ||
+ | Jan 28 15:33:52 postgresqlslaveone pgpool[25956]: | ||
+ | Jan 28 15:33:56 postgresqlslaveone pgpool[25956]: | ||
+ | Jan 28 15:33:56 postgresqlslaveone pgpool[25956]: | ||
+ | Jan 28 15:33:56 postgresqlslaveone pgpool[25956]: | ||
+ | Jan 28 15:33:56 postgresqlslaveone pgpool[25956]: | ||
+ | Jan 28 15:33:56 postgresqlslaveone pgpool[25956]: | ||
+ | Jan 28 15:33:56 postgresqlslaveone pgpool[25956]: | ||
+ | Jan 28 15:33:56 postgresqlslaveone pgpool[29053]: | ||
+ | Jan 28 15:33:59 postgresqlslaveone pgpool[26021]: | ||
+ | Jan 28 15:33:59 postgresqlslaveone pgpool[25956]: | ||
+ | Jan 28 15:33:59 postgresqlslaveone pgpool[26021]: | ||
+ | Jan 28 15:33:59 postgresqlslaveone pgpool[26021]: | ||
+ | Jan 28 15:34:00 postgresqlslaveone pgpool[29053]: | ||
+ | Jan 28 15:34:00 postgresqlslaveone pgpool[29053]: | ||
+ | Jan 28 15:34:00 postgresqlslaveone pgpool[25956]: | ||
+ | Jan 28 15:34:16 postgresqlslaveone pgpool[26293]: | ||
+ | Jan 28 15:34:16 postgresqlslaveone pgpool[26293]: | ||
+ | Jan 28 15:34:19 postgresqlslaveone pgpool[25985]: | ||
+ | Jan 28 15:34:19 postgresqlslaveone pgpool[25985]: | ||
+ | Jan 28 15:34:19 postgresqlslaveone pgpool[25956]: | ||
+ | Jan 28 15:34:19 postgresqlslaveone pgpool[25956]: | ||
+ | Jan 28 15:34:19 postgresqlslaveone pgpool[25956]: | ||
+ | Jan 28 15:34:19 postgresqlslaveone pgpool[25956]: | ||
+ | </Code> | ||
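+ | The VIP hand-over in the log above is driven by the watchdog section of pgpool.conf. A minimal sketch of the relevant settings - the interface name and the exact if_up/if_down commands are assumptions, only the delegate IP (192.168.0.220) comes from this setup: | ||
+ | <Code:none | pgpool.conf - watchdog VIP (sketch)> | ||
+ | use_watchdog = on | ||
+ | delegate_IP = '192.168.0.220' | ||
+ | if_up_cmd = 'ip addr add $_IP_$/24 dev eth0 label eth0:0' | ||
+ | if_down_cmd = 'ip addr del $_IP_$/24 dev eth0' | ||
+ | arping_cmd = 'arping -U $_IP_$ -w 1 -I eth0' | ||
+ | </Code> | ||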
+ | |||
+ | ====After failover==== | ||
+ | After all this is done, we can check the new status of the cluster :) | ||
+ | |||
+ | <Code:none | Cluster status after failover> | ||
+ | [root@postgresqlslaveone tmp]# pcp_watchdog_info -p 9898 -h 192.168.0.220 -U postgres | ||
+ | Password: | ||
+ | 2 YES postgresqlslaveone: | ||
+ | |||
+ | postgresqlslaveone: | ||
+ | postgresqlmaster: | ||
+ | [root@postgresqlslaveone tmp]# psql -h 192.168.0.220 -p 9999 -U postgres postgres -c "show pool_nodes" | ||
+ | Password for user postgres: | ||
+ | | ||
+ | ---------+--------------------+------+--------+-----------+---------+------------+-------------------+-------------------+-------------------+------------------------+--------------------- | ||
+ | | ||
+ | | ||
+ | (2 rows) | ||
+ | |||
+ | [root@postgresqlslaveone tmp]# | ||
+ | </Code> | ||
+ | |||
+ | |||
+ | ====Verify==== | ||
After executing the pcp_recovery_node command, verify that postgresqlslaveone is back in the cluster: | After executing the pcp_recovery_node command, verify that postgresqlslaveone is back in the cluster: | ||
- | < | + | < |
# psql -h 192.168.0.220 -p 9999 -U postgres postgres -c "show pool_nodes" | # psql -h 192.168.0.220 -p 9999 -U postgres postgres -c "show pool_nodes" | ||
Password for user postgres | Password for user postgres | ||
Line 485: | Line 686: | ||
</ | </ | ||
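+ | The pcp_recovery_node call referred to above typically looks like this - the node id (1) and PCP port (9898) are assumptions based on this setup: | ||
+ | <Code:none | Recover the failed node (sketch)> | ||
+ | # Re-initialise node 1 (the former master) from the current primary and re-attach it | ||
+ | pcp_recovery_node -h 192.168.0.220 -p 9898 -U postgres -n 1 | ||
+ | </Code> | ||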
+ | You can also verify the Watchdog daemon as follows: | ||
+ | |||
+ | <Code:none | Watchdog status> | ||
+ | [root@postgresqlmaster ~]# pcp_watchdog_info -p 9898 -h 192.168.0.220 -U postgres | ||
+ | Password: | ||
+ | 3 YES postgresqlmaster: | ||
+ | |||
+ | postgresqlmaster: | ||
+ | postgresqlslaveone: | ||
+ | postgresqlslavetwo: | ||
+ | [root@postgresqlmaster ~]# | ||
+ | </Code> | ||
=====Appendix===== | =====Appendix===== | ||
< | < | ||
Line 501: | Line 714: | ||
PRIMARY_NODE_HOST=$(hostname) | PRIMARY_NODE_HOST=$(hostname) | ||
- | PGHOME=/ | + | PGHOME=/ |
ARCHIVEDIR=/ | ARCHIVEDIR=/ | ||
REPLUSER=repl | REPLUSER=repl | ||
Line 580: | Line 793: | ||
exec > >(logger -i -p local1.info) 2>&1 | exec > >(logger -i -p local1.info) 2>&1 | ||
- | PGHOME=/ | + | PGHOME=/ |
DEST_NODE_HOST=" | DEST_NODE_HOST=" | ||
DEST_NODE_PGDATA=" | DEST_NODE_PGDATA=" | ||
Line 607: | Line 820: | ||
logger -i -p local1.info pgpool_remote_start: | logger -i -p local1.info pgpool_remote_start: | ||
exit 0 | exit 0 | ||
+ | </Code> | ||
+ | |||
+ | |||
+ | <Code:none | failover.sh> | ||
+ | #!/bin/bash | ||
+ | # This script is run by failover_command. | ||
+ | |||
+ | set -o xtrace | ||
+ | exec > >(logger -i -p local1.info) 2>&1 | ||
+ | |||
+ | # Special values: | ||
+ | # %d = failed node id | ||
+ | # %h = failed node hostname | ||
+ | # %p = failed node port number | ||
+ | # %D = failed node database cluster path | ||
+ | # %m = new master node id | ||
+ | # %H = new master node hostname | ||
+ | # %M = old master node id | ||
+ | # %P = old primary node id | ||
+ | # %r = new master port number | ||
+ | # %R = new master database cluster path | ||
+ | # %N = old primary node hostname | ||
+ | # %S = old primary node port number | ||
+ | # %% = ' | ||
+ | |||
+ | FAILED_NODE_ID=" | ||
+ | FAILED_NODE_HOST=" | ||
+ | FAILED_NODE_PORT=" | ||
+ | FAILED_NODE_PGDATA=" | ||
+ | NEW_MASTER_NODE_ID=" | ||
+ | NEW_MASTER_NODE_HOST=" | ||
+ | OLD_MASTER_NODE_ID=" | ||
+ | OLD_PRIMARY_NODE_ID=" | ||
+ | NEW_MASTER_NODE_PORT=" | ||
+ | NEW_MASTER_NODE_PGDATA=" | ||
+ | OLD_PRIMARY_NODE_HOST=" | ||
+ | OLD_PRIMARY_NODE_PORT=" | ||
+ | |||
+ | PGHOME=/ | ||
+ | |||
+ | |||
+ | logger -i -p local1.info failover.sh: | ||
+ | |||
+ | ## If there' | ||
+ | if [ $NEW_MASTER_NODE_ID -lt 0 ]; then | ||
+ | logger -i -p local1.info failover.sh: | ||
+ | exit 0 | ||
+ | fi | ||
+ | |||
+ | ## Test passwordless SSH | ||
+ | ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/ | ||
+ | |||
+ | if [ $? -ne 0 ]; then | ||
+ | logger -i -p local1.info failover.sh: | ||
+ | exit 1 | ||
+ | fi | ||
+ | |||
+ | ## If Standby node is down, skip failover. | ||
+ | if [ $FAILED_NODE_ID -ne $OLD_PRIMARY_NODE_ID ]; then | ||
+ | logger -i -p local1.info failover.sh: | ||
+ | |||
+ | ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/ | ||
+ | ${PGHOME}/ | ||
+ | " | ||
+ | |||
+ | if [ $? -ne 0 ]; then | ||
+ | logger -i -p local1.error failover.sh: | ||
+ | exit 1 | ||
+ | fi | ||
+ | |||
+ | exit 0 | ||
+ | fi | ||
+ | |||
+ | ## Promote Standby node. | ||
+ | logger -i -p local1.info failover.sh: | ||
+ | |||
+ | ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/ | ||
+ | postgres@${NEW_MASTER_NODE_HOST} -i ~/ | ||
+ | |||
+ | if [ $? -ne 0 ]; then | ||
+ | logger -i -p local1.error failover.sh: | ||
+ | exit 1 | ||
+ | fi | ||
+ | |||
+ | logger -i -p local1.info failover.sh: | ||
+ | exit 0 | ||
+ | </Code> | ||
+ | |||
+ | <Code:none | follow_master.sh> | ||
+ | #!/bin/bash | ||
+ | # This script is run after failover_command to synchronize the Standby with the new Primary. | ||
+ | # First try pg_rewind. If pg_rewind failed, use pg_basebackup. | ||
+ | |||
+ | set -o xtrace | ||
+ | exec > >(logger -i -p local1.info) 2>&1 | ||
+ | |||
+ | # Special values: | ||
+ | # %d = failed node id | ||
+ | # %h = failed node hostname | ||
+ | # %p = failed node port number | ||
+ | # %D = failed node database cluster path | ||
+ | # %m = new master node id | ||
+ | # %H = new master node hostname | ||
+ | # %M = old master node id | ||
+ | # %P = old primary node id | ||
+ | # %r = new master port number | ||
+ | # %R = new master database cluster path | ||
+ | # %N = old primary node hostname | ||
+ | # %S = old primary node port number | ||
+ | # %% = ' | ||
+ | |||
+ | FAILED_NODE_ID=" | ||
+ | FAILED_NODE_HOST=" | ||
+ | FAILED_NODE_PORT=" | ||
+ | FAILED_NODE_PGDATA=" | ||
+ | NEW_MASTER_NODE_ID=" | ||
+ | OLD_MASTER_NODE_ID=" | ||
+ | NEW_MASTER_NODE_HOST=" | ||
+ | OLD_PRIMARY_NODE_ID=" | ||
+ | NEW_MASTER_NODE_PORT=" | ||
+ | NEW_MASTER_NODE_PGDATA=" | ||
+ | |||
+ | PGHOME=/ | ||
+ | ARCHIVEDIR=/ | ||
+ | REPLUSER=repl | ||
+ | PCP_USER=pgpool | ||
+ | PGPOOL_PATH=/ | ||
+ | PCP_PORT=9898 | ||
+ | |||
+ | logger -i -p local1.info follow_master.sh: | ||
+ | |||
+ | ## Test passwordless SSH | ||
+ | ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/ | ||
+ | |||
+ | if [ $? -ne 0 ]; then | ||
+ | logger -i -p local1.info follow_master.sh: | ||
+ | exit 1 | ||
+ | fi | ||
+ | |||
+ | ## Get PostgreSQL major version | ||
+ | PGVERSION=`${PGHOME}/ | ||
+ | |||
+ | if [ $PGVERSION -ge 12 ]; then | ||
+ | RECOVERYCONF=${FAILED_NODE_PGDATA}/ | ||
+ | else | ||
+ | RECOVERYCONF=${FAILED_NODE_PGDATA}/ | ||
+ | fi | ||
+ | |||
+ | ## Check the status of Standby | ||
+ | ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/ | ||
+ | postgres@${FAILED_NODE_HOST} -i ~/ | ||
+ | |||
+ | |||
+ | ## If Standby is running, synchronize it with the new Primary. | ||
+ | if [ $? -eq 0 ]; then | ||
+ | |||
+ | logger -i -p local1.info follow_master.sh: | ||
+ | |||
+ | # Create replication slot " | ||
+ | ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/ | ||
+ | ${PGHOME}/ | ||
+ | " | ||
+ | |||
+ | ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/ | ||
+ | |||
+ | set -o errexit | ||
+ | |||
+ | ${PGHOME}/ | ||
+ | |||
+ | cat > ${RECOVERYCONF} << EOT | ||
+ | primary_conninfo = ' | ||
+ | recovery_target_timeline = ' | ||
+ | restore_command = 'scp ${NEW_MASTER_NODE_HOST}: | ||
+ | primary_slot_name = ' | ||
+ | EOT | ||
+ | |||
+ | if [ ${PGVERSION} -ge 12 ]; then | ||
+ | touch ${FAILED_NODE_PGDATA}/ | ||
+ | else | ||
+ | echo \" | ||
+ | fi | ||
+ | |||
+ | ${PGHOME}/ | ||
+ | |||
+ | " | ||
+ | |||
+ | if [ $? -ne 0 ]; then | ||
+ | logger -i -p local1.error follow_master.sh: | ||
+ | |||
+ | ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/ | ||
+ | |||
+ | set -o errexit | ||
+ | |||
+ | # Execute pg_basebackup | ||
+ | rm -rf ${FAILED_NODE_PGDATA} | ||
+ | rm -rf ${ARCHIVEDIR}/ | ||
+ | ${PGHOME}/ | ||
+ | |||
+ | if [ ${PGVERSION} -ge 12 ]; then | ||
+ | sed -i -e \" | ||
+ | -e \"/ | ||
+ | fi | ||
+ | |||
+ | cat > ${RECOVERYCONF} << EOT | ||
+ | primary_conninfo = ' | ||
+ | recovery_target_timeline = ' | ||
+ | restore_command = 'scp ${NEW_MASTER_NODE_HOST}: | ||
+ | primary_slot_name = ' | ||
+ | EOT | ||
+ | |||
+ | if [ ${PGVERSION} -ge 12 ]; then | ||
+ | touch ${FAILED_NODE_PGDATA}/ | ||
+ | else | ||
+ | echo \" | ||
+ | fi | ||
+ | " | ||
+ | |||
+ | if [ $? -ne 0 ]; then | ||
+ | # drop replication slot | ||
+ | ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/ | ||
+ | ${PGHOME}/ | ||
+ | " | ||
+ | |||
+ | logger -i -p local1.error follow_master.sh: | ||
+ | exit 1 | ||
+ | fi | ||
+ | fi | ||
+ | |||
+ | # start Standby node on ${FAILED_NODE_HOST} | ||
+ | ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/ | ||
+ | postgres@${FAILED_NODE_HOST} -i ~/ | ||
+ | |||
+ | # If start Standby successfully, | ||
+ | if [ $? -eq 0 ]; then | ||
+ | |||
+ | # Run pcp_attach_node to attach Standby node to Pgpool-II. | ||
+ | ${PGPOOL_PATH}/ | ||
+ | |||
+ | if [ $? -ne 0 ]; then | ||
+ | logger -i -p local1.error follow_master.sh: | ||
+ | exit 1 | ||
+ | fi | ||
+ | |||
+ | # If start Standby failed, drop replication slot " | ||
+ | else | ||
+ | |||
+ | ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/ | ||
+ | ${PGHOME}/ | ||
+ | |||
+ | logger -i -p local1.error follow_master.sh: | ||
+ | exit 1 | ||
+ | fi | ||
+ | |||
+ | else | ||
+ | logger -i -p local1.info follow_master.sh: | ||
+ | exit 0 | ||
+ | fi | ||
+ | |||
+ | logger -i -p local1.info follow_master.sh: | ||
+ | exit 0 | ||
+ | </Code> | ||
+ | |||
+ | |||
+ | =====Implementation with Kubernetes===== | ||
+ | Implementing pgpool in Kubernetes can be done either: | ||
+ | 1) Via environment variables | ||
+ | 2) Via ConfigMaps | ||
+ | |||
+ | In this case, we will use a config map: | ||
+ | |||
+ | <Code:none | pgpool ConfigMap> | ||
+ | apiVersion: v1 | ||
+ | kind: ConfigMap | ||
+ | metadata: | ||
+ | name: pgpool-config | ||
+ | namespace: db-test | ||
+ | labels: | ||
+ | app: pgpool-config | ||
+ | data: | ||
+ | pgpool.conf: | ||
+ | listen_addresses = ' | ||
+ | port = 9999 | ||
+ | socket_dir = '/ | ||
+ | pcp_listen_addresses = ' | ||
+ | pcp_port = 9898 | ||
+ | pcp_socket_dir = '/ | ||
+ | backend_hostname0 = experience-db-cluster-alinma-rw | ||
+ | backend_port0 = 5432 | ||
+ | backend_weight0 = 1 | ||
+ | backend_flag0 = ' | ||
+ | backend_auth_method0 = ' | ||
+ | backend_password0 = ' | ||
+ | backend_hostname1 = experience-db-cluster-alinma-ro | ||
+ | backend_port1 = 5432 | ||
+ | backend_weight1 = 1 | ||
+ | backend_flag1 = ' | ||
+ | backend_password1 = ' | ||
+ | backend_auth_method1 = ' | ||
+ | backend_hostname2 = experience-db-cluster-alinma-ro | ||
+ | backend_port2 = 5432 | ||
+ | backend_weight2 = 2 | ||
+ | backend_flag2 = ' | ||
+ | backend_password2 = ' | ||
+ | backend_auth_method2 = ' | ||
+ | sr_check_user = ' | ||
+ | sr_check_password = ' | ||
+ | sr_check_period = 10 | ||
+ | enable_pool_hba = on | ||
+ | master_slave_mode = on | ||
+ | num_init_children = 32 | ||
+ | max_pool = 4 | ||
+ | child_life_time = 300 | ||
+ | child_max_connections = 0 | ||
+ | connection_life_time = 0 | ||
+ | client_idle_limit = 0 | ||
+ | connection_cache = on | ||
+ | load_balance_mode = on | ||
+ | PGPOOL_PCP_USER = ' | ||
+ | PGPOOL_PCP_PASSWORD = ' | ||
+ | pcp.conf: |- | ||
+ | experience_db: | ||
+ | pool_passwd: | ||
+ | experience_db: | ||
+ | pool_hba.conf: | ||
+ | local | ||
+ | host all | ||
+ | host all | ||
+ | host all | ||
+ | </Code> | ||
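+ | The pool_passwd entry in the ConfigMap above holds a hashed password for the database user. Depending on the authentication method it can be generated with the pg_md5 or pg_enc utilities shipped with Pgpool-II - the user name and password below are placeholders: | ||
+ | <Code:none | Generate a pool_passwd entry (sketch)> | ||
+ | # md5 flavour: writes an entry of the form "experience_db:md5<hash>" into pool_passwd | ||
+ | pg_md5 --md5auth --username=experience_db "the_actual_password" | ||
+ | # scram-sha-256 / AES flavour: needs a .pgpoolkey file holding the encryption key | ||
+ | pg_enc -m -k /root/.pgpoolkey -u experience_db "the_actual_password" | ||
+ | </Code> | ||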
+ | |||
+ | |||
+ | After we create that configmap with: | ||
+ | |||
+ | <Code:none | Apply the ConfigMap> | ||
+ | kk apply -f configmap.yaml | ||
+ | </Code> | ||
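+ | (kk here is presumably an alias for kubectl.) The result can then be checked with something like: | ||
+ | <Code:none | Check the ConfigMap (sketch)> | ||
+ | kubectl -n db-test get configmap pgpool-config -o yaml | head -n 20 | ||
+ | </Code> | ||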
+ | |||
+ | We can create the deployment and the service now: | ||
+ | |||
+ | <Code:none | pgpool Deployment and Service> | ||
+ | apiVersion: apps/v1 | ||
+ | kind: Deployment | ||
+ | metadata: | ||
+ | name: pgpool | ||
+ | spec: | ||
+ | replicas: 3 | ||
+ | selector: | ||
+ | matchLabels: | ||
+ | app: pgpool | ||
+ | template: | ||
+ | metadata: | ||
+ | labels: | ||
+ | app: pgpool | ||
+ | spec: | ||
+ | containers: | ||
+ | - name: pgpool | ||
+ | image: pgpool/ | ||
+ | env: | ||
+ | - name: POSTGRES_USERNAME | ||
+ | value: " | ||
+ | - name: POSTGRES_PASSWORD | ||
+ | value: " | ||
+ | - name: PGPOOL_PASSWORD_ENCRYPTION_METHOD | ||
+ | value: " | ||
+ | - name: PGPOOL_ENABLE_POOL_PASSWD | ||
+ | value: " | ||
+ | - name: PGPOOL_SKIP_PASSWORD_ENCRYPTION | ||
+ | value: " | ||
+ | # The following settings are not required when not using the Pgpool-II PCP command. | ||
+ | # To enable the following settings, you must define a secret that stores the PCP user's | ||
+ | # username and password. | ||
+ | #- name: PGPOOL_PCP_USER | ||
+ | # valueFrom: | ||
+ | # secretKeyRef: | ||
+ | # name: pgpool-pcp-secret | ||
+ | # key: username | ||
+ | #- name: PGPOOL_PCP_PASSWORD | ||
+ | # valueFrom: | ||
+ | # secretKeyRef: | ||
+ | # name: pgpool-pcp-secret | ||
+ | # key: password | ||
+ | volumeMounts: | ||
+ | - name: pgpool-config | ||
+ | mountPath: /config | ||
+ | #- name: pgpool-tls | ||
+ | # mountPath: /config/tls | ||
+ | volumes: | ||
+ | - name: pgpool-config | ||
+ | configMap: | ||
+ | name: pgpool-config | ||
+ | # Configure your own TLS certificate. | ||
+ | # If not set, Pgpool-II will automatically generate the TLS certificate if ssl = on. | ||
+ | #- name: pgpool-tls | ||
+ | # secret: | ||
+ | # secretName: pgpool-tls | ||
+ | --- | ||
+ | apiVersion: v1 | ||
+ | kind: Service | ||
+ | metadata: | ||
+ | name: pgpool | ||
+ | spec: | ||
+ | selector: | ||
+ | app: pgpool | ||
+ | ports: | ||
+ | - name: pgpool-port | ||
+ | protocol: TCP | ||
+ | port: 9999 | ||
+ | targetPort: 9999 | ||
</ | </ |
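+ | Once the Deployment and the Service are up, a quick end-to-end test is to open a psql session through the pgpool Service from inside the cluster - the image, namespace and user below are assumptions: | ||
+ | <Code:none | Test a connection through Pgpool-II (sketch)> | ||
+ | # Throw-away client pod connecting to the pgpool Service on port 9999 | ||
+ | kubectl -n db-test run pgpool-client --rm -it --image=postgres:13 --restart=Never -- psql -h pgpool -p 9999 -U experience_db postgres -c "show pool_nodes" | ||
+ | </Code> | ||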