
Overview

In this section, we will build three servers: one Ansible control node (ansiblecentral) and two MySQL servers (mysqlsrv1 and mysqlsrv2).

Everything will be automated, so let's get going.

Configuration

Since everything will be automated, we will use two tools:

  1. Vagrant: For the Virtual Machines
  2. Ansible: For the MySQL Replication

That will allow us to automate the entire process :)

Let's start with the VMs:

VM Automation

I have prepared a simple Vagrantfile; when you run it, you will see output along these lines:

VM Creation

PS C:\Learning\Oracle Interview\VMs\MySQLSetup> vagrant up
******************************************************************
    mysqlsrv2:   PyYAML.x86_64 0:3.10-11.el7
    mysqlsrv2:   libyaml.x86_64 0:0.1.4-11.el7_0
    mysqlsrv2:   python-babel.noarch 0:0.9.6-8.el7
    mysqlsrv2:   python-backports.x86_64 0:1.0-8.el7
    mysqlsrv2:   python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7
    mysqlsrv2:   python-cffi.x86_64 0:1.6.0-5.el7
    mysqlsrv2:   python-enum34.noarch 0:1.0.4-1.el7
    mysqlsrv2:   python-idna.noarch 0:2.4-1.el7
    mysqlsrv2:   python-ipaddress.noarch 0:1.0.16-2.el7
    mysqlsrv2:   python-jinja2.noarch 0:2.7.2-4.el7
    mysqlsrv2:   python-markupsafe.x86_64 0:0.11-10.el7
    mysqlsrv2:   python-paramiko.noarch 0:2.1.1-9.el7
    mysqlsrv2:   python-ply.noarch 0:3.4-11.el7
    mysqlsrv2:   python-pycparser.noarch 0:2.14-1.el7
    mysqlsrv2:   python-setuptools.noarch 0:0.9.8-7.el7
    mysqlsrv2:   python-six.noarch 0:1.9.0-2.el7
    mysqlsrv2:   python2-cryptography.x86_64 0:1.7.2-2.el7
    mysqlsrv2:   python2-httplib2.noarch 0:0.18.1-3.el7
    mysqlsrv2:   python2-jmespath.noarch 0:0.9.4-2.el7
    mysqlsrv2:   python2-pyasn1.noarch 0:0.1.9-7.el7
    mysqlsrv2:   sshpass.x86_64 0:1.06-2.el7
    mysqlsrv2:
    mysqlsrv2: Complete!
==> mysqlsrv2: Running provisioner: shell...
    mysqlsrv2: Running: inline script

You can find the Vagrantfile in the Appendix at the end of the section, alongside the other automation files.

Once we have the VMs, connect to “ansiblecentral”. The network topology is as follows:

Network topology

10.100.199.200 ansiblecentral 
10.100.199.201 mysqlsrv1 
10.100.199.202 mysqlsrv2 
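
Before touching MySQL, it is worth a quick sanity check that the control node can actually reach both database hosts over SSH. The mysqlsrvs group below comes from the /etc/ansible/hosts inventory that the Vagrantfile generates (see the Appendix):

Connectivity check

# Run as root on ansiblecentral
ansible mysqlsrvs -m ping
ansible mysqlsrvs -m command -a "hostname"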

MySQL Replication Automation

Using Ansible, I have created several playbooks that deploy MySQL replication in a matter of minutes:

Deploy MySQL Replication

[root@ansiblecentral vagrant]# ./deployAllToRun.sh

PLAY [install MySQL Servers] **********************************************************************************************************************************************************

TASK [Gathering Facts] ****************************************************************************************************************************************************************
ok: [10.100.199.201]
ok: [10.100.199.202]

TASK [Install MySQL 5.7 repo] *********************************************************************************************************************************************************
ok: [10.100.199.201]
ok: [10.100.199.202]
***************************************************
PLAY RECAP ****************************************************************************************************************************************************************************
10.100.199.201             : ok=15   changed=11   unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
10.100.199.202             : ok=14   changed=11   unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

[root@ansiblecentral vagrant]#

Your ok and changed counts might differ.

That is all it takes to deploy MySQL with replication.
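
If you want to double-check the result, the standard MySQL 5.7 replication status commands will do; on the slave, both replication threads should report Yes. This is a generic check, not part of the playbooks:

Replication check

# On mysqlsrv2 (the slave); root credentials are read from /root/.my.cnf
mysql -e "SHOW SLAVE STATUS\G" | grep -E "Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master"

# On mysqlsrv1 (the master)
mysql -e "SHOW MASTER STATUS;"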

Appendix

Vagrant File

Vagrant file

$allow_shell = <<SCRIPT
sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/g' /etc/ssh/sshd_config    
systemctl restart sshd.service
SCRIPT

$edit_hosts_file = <<SCRIPT
cat > /etc/hosts <<EOF
127.0.0.1 localhost

10.100.199.200 ansiblecentral 
10.100.199.201 mysqlsrv1 
10.100.199.202 mysqlsrv2 
EOF
SCRIPT


$install_ansible = <<SCRIPT
sudo yum -y install epel-release
sudo yum -y install ansible

cat > /etc/ansible/hosts <<EOF
[ansiblecentral]
10.100.199.200
 
[mysqlsrvs]
10.100.199.201
10.100.199.202
 
[mysqlsrvs:vars]
ansible_user=root
ansible_password=vagrant
ansible_connection=ssh
EOF

sed -i 's/#host_key_checking = False/host_key_checking = False/g' /etc/ansible/ansible.cfg  
SCRIPT

$install_mysql_community_plugin = <<SCRIPT
ansible-galaxy collection install community.mysql
SCRIPT

$disable_selinux = <<SCRIPT
setenforce 0
SCRIPT



# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure("2") do |config|
  config.vm.box_download_insecure = true
  config.vm.provision :shell, inline: "setenforce 0", run: "always"
  config.vm.define "ansiblecentral" do |ansiblecentral|
    ansiblecentral.vm.box = "centos/7"
    ansiblecentral.vm.hostname = 'ansiblecentral'
    ansiblecentral.vm.network :private_network, ip: "10.100.199.200"
    ansiblecentral.vm.provision "shell", inline: $allow_shell, privileged: true
    ansiblecentral.vm.provision "shell", inline: $edit_hosts_file, privileged: true
    ansiblecentral.vm.provision "shell", inline: $install_ansible, privileged: true
    ansiblecentral.vm.provision "shell", inline: $disable_selinux, privileged: true
    ansiblecentral.vm.provision "shell", inline: $install_mysql_community_plugin, privileged: true
    ansiblecentral.vm.synced_folder "./MySQLSetup", "/vagrant", type: "virtualbox"
    ansiblecentral.vm.provider :virtualbox do |v|
      v.customize ["modifyvm", :id, "--memory", 1024]
      v.customize ["storageattach", :id,
                   "--storagectl", "IDE",
                   "--port", "0", "--device", "1",
                   "--type", "dvddrive",
                   "--medium", "emptydrive"]
    end
  end

  (1..2).each do |i|
    config.vm.define "mysqlsrv#{i}" do |worker|
      worker.vm.box = "centos/7"
      worker.vm.box_check_update = true
      worker.vm.network :private_network, ip: "10.100.199.20#{i}"
      worker.vm.hostname = "mysqlsrv#{i}"
      worker.vm.provision "shell", inline: $allow_shell, privileged: true
      worker.vm.provision "shell", inline: $edit_hosts_file, privileged: true
      worker.vm.provision "shell", inline: $install_ansible, privileged: true
      worker.vm.provision "shell", inline: $disable_selinux, privileged: true
      worker.vm.provider "virtualbox" do |vb|
        vb.name = "mysqlsrv#{i}"
        vb.memory = "1024"
        vb.customize ["storageattach", :id,
                      "--storagectl", "IDE",
                      "--port", "0", "--device", "1",
                      "--type", "dvddrive",
                      "--medium", "emptydrive"]
    end
    end
  end
end
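
Once vagrant up has finished, connecting to the control node and kicking off the deployment looks roughly like this (a sketch, assuming the automation files sit in the synced ./MySQLSetup folder, which the Vagrantfile mounts at /vagrant on ansiblecentral):

Running the deployment

# From the host machine
vagrant ssh ansiblecentral

# Inside the VM
sudo -i
cd /vagrant
./deployAllToRun.sh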

MySQL Basic Config

MySQL Basic Config

---
- name: install MySQL Servers 
  hosts: mysqlsrvs
  vars:
    root_password: "{{ root_password }}"
  remote_user: root

  tasks:
  - name: Install MySQL 5.7 repo
    yum: name=http://dev.mysql.com/get/mysql57-community-release-el7-8.noarch.rpm state=present
    
  - name: Install Percona repo
    yum: name=https://repo.percona.com/yum/percona-release-latest.noarch.rpm state=present

  - name: Install MySQL 5.7
    yum: 
      pkg: ['mysql-community-server', 'mysql-community-client', 'MySQL-python']

  - name: Install Percona Backup
    yum: 
      pkg: ['percona-xtrabackup-24']

  - name: Start the MySQL service
    service: name=mysqld state=started enabled=true

  - name: Change MySQL root password and store it in /root/.my.cnf
    shell: |
      password_match=`awk '/A temporary password is generated for/ {a=$0} END{ print a }' /var/log/mysqld.log | awk '{print $(NF)}'`
      echo $password_match
      mysql -uroot -p$password_match --connect-expired-password -e "ALTER USER 'root'@'localhost' IDENTIFIED BY '{{ root_password }}';flush privileges; "
      echo "[client]
      user=root
      password=\"{{ root_password }}\"" > /root/.my.cnf
    args:
      creates: /root/.my.cnf
    register: change_temp_pass
    notify: restart mysqld

  - meta: flush_handlers
  - debug:
      var: change_temp_pass
  handlers:
    - name: restart mysqld
      service: name=mysqld
               state=restarted
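
After this playbook runs, the root credentials are stored in /root/.my.cnf on each MySQL server, so the mysql client works without a password prompt. A quick, generic way to confirm the installation:

MySQL install check

# On mysqlsrv1 or mysqlsrv2, as root
systemctl status mysqld --no-pager
mysql -e "SELECT VERSION();"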

MySQL Replication Config

MySQL Replication Config

---
  
- hosts: mysqlsrvs[0]
  vars:
    replication_password: "{{ replication_password }}"
  tasks:
    - name: create option file directory
      file:
        path: /etc/mysql
        state: directory
        owner: root
        group: root
        mode: 0755
    - name: create option file
      ansible.builtin.copy:
        src: /vagrant/master.cnf
        dest: /etc/mysql/my.cnf
        owner: root
        group: root
        mode: 0755
    - name: Stop slave
      mysql_replication:
        mode: stopslave
    - name: Reset slave
      mysql_replication:
        mode: resetslave
    - name: Restart MySQL on master
      service:
        name: mysqld
        state: restarted
    - name: Copy SQL file
      ansible.builtin.copy:
        src: /vagrant/interview_tables_dump.sql
        dest: /etc/interview_tables_dump.sql
        owner: mysql
        group: mysql
        mode: 0755
        follow: yes
    - name: Configure MySQL Master
      community.mysql.mysql_user:
        login_user: root
        name: replication 
        host: '%'
        password: "{{ replication_password }}"
        priv: "*.*:REPLICATION SLAVE,REPLICATION CLIENT"
        state: present
    - name: Create RepDB
      community.mysql.mysql_db:
        login_user: root
        state: import
        name: repDB
        target: /etc/interview_tables_dump.sql
    - name: create temporary backup directory on master
      tempfile:
        state: directory
        prefix: ansible-
        suffix: .mysqlslave
      register: backupdir
    - name: dump database from master
      shell: |
        innobackupex -uroot --no-timestamp {{ backupdir.path }}/dump 2>&1 | tail -n 1
      register: innobackupex
      failed_when: '"completed OK!" not in innobackupex.stdout'
    - name: apply log to database dump
      shell: |
        innobackupex -uroot --apply-log {{ backupdir.path }}/dump 2>&1 | tail -n 1
      register: apply_log
      failed_when: '"completed OK!" not in apply_log.stdout'
    - name: compress database dump
      archive:
        path: "{{ backupdir.path }}/dump/"
        dest: "{{ backupdir.path }}/dump.tar.gz"
        format: gz
        owner: root
        group: root
        mode: 0600
        remove: true
    - name: create temporary directory on localhost
      delegate_to: localhost
      tempfile:
        state: directory
        prefix: ansible-
        suffix: .mysqlslave
      register: local_tmp
    - name: Sync database dump from master
      synchronize:
        src: "{{ backupdir.path }}/dump.tar.gz"
        dest: /tmp/
        mode: pull
      delegate_to: "{{ groups['mysqlsrvs'][1] }}"
      
- hosts: mysqlsrvs[1]
  vars:
    replication_password: "{{ replication_password }}"
  tasks:
    - name: stop MySQL on slave
      service:
        name: mysqld
        state: stopped
    - name: delete slave data
      file:
        path: /var/lib/mysql
        state: absent
    - name: create /var/lib/mysql
      file:
        path: /var/lib/mysql
        state: directory
        owner: mysql
        group: mysql
        mode: 0755
    - name: uncompress database dump
      unarchive:
        src: "/tmp/dump.tar.gz"
        dest: /var/lib/mysql
        owner: mysql
        group: mysql
        remote_src: yes
    - name: create option file directory
      file:
        path: /etc/mysql
        state: directory
        owner: root
        group: root
        mode: 0755
    - name: create option file
      ansible.builtin.copy:
        src: /vagrant/slave.cnf
        dest: /etc/mysql/my.cnf
        owner: root
        group: root
        mode: 0755
    - name: start MySQL on slave
      service:
        name: mysqld
        state: started
    - name: Stop slave
      mysql_replication:
        mode: stopslave
    - name: Reset slave
      mysql_replication:
        mode: resetslave
    - name: get binlog file
      command: >-
        sed -r 's/^(.*)\s[0-9]+$/\1/'
        /var/lib/mysql/xtrabackup_binlog_pos_innodb
      args:
        warn: false
      register: binlog_file
    - name: get binlog position
      command: >-
        sed -r 's/^.*\s([0-9]+)$/\1/'
        /var/lib/mysql/xtrabackup_binlog_pos_innodb
      args:
        warn: false
      register: binlog_position
    - name: configure MySQL slave process
      mysql_replication:
        master_host: "{{ groups['mysqlsrvs'][0] }}"
        master_user: replication
        master_password: "{{ replication_password }}"
        master_log_file: "{{ binlog_file.stdout }}"
        master_log_pos: "{{ binlog_position.stdout }}"
        mode: changemaster
    - name: start MySQL slave process
      mysql_replication:
        mode: startslave
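
A simple way to confirm that changes actually flow from master to slave is to write a throwaway row into the replicated database and read it back on the other side. The table name below (repl_smoke_test) is only an illustration; it is not part of the dump the playbook imports:

Replication smoke test

# On mysqlsrv1 (master): repDB is the only database listed in binlog-do-db
mysql -e "CREATE TABLE IF NOT EXISTS repDB.repl_smoke_test (id INT PRIMARY KEY, note VARCHAR(32));"
mysql -e "INSERT INTO repDB.repl_smoke_test VALUES (1, 'hello from master');"

# On mysqlsrv2 (slave): the row should show up within a moment
mysql -e "SELECT * FROM repDB.repl_smoke_test;"

# Clean up on the master once satisfied
mysql -e "DROP TABLE IF EXISTS repDB.repl_smoke_test;"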

MySQL Switchover

MySQL Switchover

---
  
- hosts: mysqlsrvs[0]
  vars:
    replication_password: "{{ replication_password }}"
  tasks:
    - name: Lock the database
      shell: |
        mysql -u root  -e "FLUSH TABLES WITH READ LOCK;" 
    - name: Get bin log file for Master
      shell: |
        mysql -u root -A --skip-column-names -e "SHOW MASTER STATUS;" | awk '{print $1}'
      args:
        warn: false
      register: binlog_file_master
    - name: Get bin log position for Master
      shell: |
        mysql -u root -A --skip-column-names -e "SHOW MASTER STATUS;" | awk '{print $2}' 
      args:
        warn: false
      register: binlog_position_master
    - name: Get bin log file for Slave
      shell: |
        mysql -u replication -p{{ replication_password }}  --host {{ groups['mysqlsrvs'][1] }} -A  -e "SHOW SLAVE STATUS\G;" | grep -w "Master_Log_File" | awk -F ": " '{print $2}'
      args:
        warn: false
      register: binlog_file_slave
      failed_when: binlog_file_slave.stdout != binlog_file_master.stdout
    - name: Get bin log position for Slave
      shell: |
        mysql -u replication -p{{ replication_password }} -A --host {{ groups['mysqlsrvs'][1] }} -e "SHOW SLAVE STATUS\G;" | grep -w "Exec_Master_Log_Pos" | awk -F ": " '{print $2}'
      args:
        warn: false
      register: binlog_position_slave
      failed_when: binlog_position_slave.stdout != binlog_position_master.stdout
    - name: Stop the old master
      service:
        name: mysqld
        state: stopped
    - name: create option file directory
      file:
        path: /etc/mysql
        state: directory
        owner: root
        group: root
        mode: 0755
    - name: create option file
      ansible.builtin.copy:
        src: /vagrant/slave.cnf
        dest: /etc/mysql/my.cnf
        owner: root
        group: root
        mode: 0755
        follow: yes
        
- hosts: mysqlsrvs[1]
  tasks:
    - name: create option file directory
      file:
        path: /etc/mysql
        state: directory
        owner: root
        group: root
        mode: 0755
    - name: create option file
      ansible.builtin.copy:
        src: /vagrant/master.cnf
        dest: /etc/mysql/my.cnf
        owner: root
        group: root
        mode: 0755
        follow: yes
    - name: Stop slave
      mysql_replication:
        mode: stopslave
    - name: Reset Slave
      mysql_replication:
        mode: resetslave
    - name: Restart the new master
      service:
        name: mysqld
        state: restarted

- hosts: mysqlsrvs[0]
  vars:
    replication_password: "{{ replication_password }}"
  tasks:
    - name: Start Mysqld on new Slave
      service:
        name: mysqld
        state: started
    - name: Get bin log file for Master
      shell: |
        mysql -u replication -p{{ replication_password }} -A --host {{ groups['mysqlsrvs'][1] }} --skip-column-names -e "SHOW MASTER STATUS;" | awk '{print $1}'
      args:
        warn: false
      register: binlog_file
    - name: Get bin log position for Master
      shell: |
        mysql -u replication -p{{ replication_password }} -A --host {{ groups['mysqlsrvs'][1] }} --skip-column-names -e "SHOW MASTER STATUS;" | awk '{print $2}'
      args:
        warn: false
      register: binlog_position
    - debug:
        msg: "{{ binlog_file }}"
    - debug:
        msg: "{{ binlog_position }}"
    - name: Stop slave
      mysql_replication:
        mode: stopslave
    - name: Reset slave
      mysql_replication:
        mode: resetslave
    - debug:
        msg: "{{ replication_password }}"
    - name: Configure replication
      mysql_replication:
        master_host: "{{ groups['mysqlsrvs'][1] }}"
        master_user: replication
        master_password: "{{ replication_password }}"
        master_log_file: "{{ binlog_file.stdout }}"
        master_log_pos: "{{ binlog_position.stdout }}"
        mode: changemaster
    - name: start MySQL slave process
      mysql_replication:
        mode: startslave
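
The switchover playbook is not run by the deployment script by default (the call is commented out in deployAllToRun.sh below). When you want to swap the roles, run it on demand from ansiblecentral, passing the same replication password that was used for the initial replication deployment:

Run the switchover

ansible-playbook mysql-replication-switchover.yml --extra-vars "replication_password=replication_password"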

deployAllToRun.sh

The last script ties everything together and runs the playbooks:

deploy Script

#!/bin/bash
###MySQL Deployment
ansible-playbook mysql-playbook.yml --extra-vars "root_password=mysql_root_password"

###Replication Deployment
ansible-playbook mysql-replication-playbook.yml --extra-vars "replication_password=replication_password"

###Switchover###
###ansible-playbook mysql-replication-switchover.yml --extra-vars "replication_password=mysql_replication_password"

master.cnf

Master configuration file

[mysqld]
log-bin=mysql-bin
server-id=1
binlog-do-db=repDB

slave.cnf

Slave Configuration File

[mysqld]
server-id=2
read_only=1
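
To verify that these options are in effect on each node, a couple of generic MySQL queries are enough (run each on the respective server):

Config check

mysql -e "SHOW VARIABLES LIKE 'server_id';"
mysql -e "SHOW VARIABLES LIKE 'log_bin';"     # ON on the master
mysql -e "SHOW VARIABLES LIKE 'read_only';"   # 1 on the slave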