mysql8_innodb_cluster



InnoDB Cluster is a modern solution that provides group replication for MySQL. In this configuration, one node is the primary (R/W) while the rest are read-only secondaries (R/O), all reporting ONLINE.

Of course, your IPs will look different, because I am way too lazy to make proper pictures.

I am not a big fan of manual work (ask my grandma about it), so I automated the whole shebang using Terraform + Ansible. I will not show the Terraform because it is strictly environment-specific, but I will show you the Ansible.
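For reference, the dbservers group used by the plays below is a plain inventory group. A minimal sketch (the hostnames match the cluster status output further down; everything else is an assumption about your environment):

```ini
# inventory.ini — illustrative; adjust hosts to your environment
[dbservers]
mysql-cluster-a
mysql-cluster-b
mysql-cluster-c
```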

Preparation

- name: Configure MySQL Libraries
  vars:
    ansible_user: opc
    ansible_become: yes
    ansible_become_method: sudo
    ansible_python_interpreter: /usr/bin/python
  hosts: dbservers
  tasks:
    - block:
      - name: Install the necessary packages
        yum:
          name:
            - mysql-server
            - mysql-shell
            - mysql-devel
            - mysql-common
            - mysql-libs
            - mysql
          state: present

      - name: Put SELinux in permissive mode
        ansible.builtin.command: setenforce 0

      - name: Stop firewall
        ansible.builtin.service:
          name: firewalld
          state: stopped
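Note that both of these changes are runtime-only: `setenforce 0` and stopping firewalld do not survive a reboot. To make the SELinux change persistent, you would also set it in the config file (sketch below; in production you would more likely keep firewalld running and open MySQL's documented ports — 3306, 33060, and the Group Replication port 33061 — instead of stopping it outright):

```ini
# /etc/selinux/config — persists across reboots, unlike setenforce 0
SELINUX=permissive
```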

After that, you can configure the rest with the following Ansible playbook:

Configuration

- name: Configure MySQL InnoDB Cluster
  vars:
    ansible_user: opc
    ansible_become: yes
    ansible_become_method: sudo
  hosts: dbservers
  tasks:
    - block:
      - name: Start the MySQL Service
        ansible.builtin.service:
          name: mysqld
          state: started

      - name: Check for an existing /root/.my.cnf
        stat:
          path: /root/.my.cnf
        register: mysql_file

      - name: Obtain & change the temporary root password
        shell: |
          temporary_password=`awk '/A temporary password is generated for/ {a=$0} END{ print a }' /var/log/mysqld.log | awk '{print $(NF)}'`
          mysql -uroot -p${temporary_password} -e "ALTER USER 'root'@'localhost' IDENTIFIED BY '{{ root_password }}';flush privileges;" --connect-expired-password
          echo "[client]
          user=root
          password=\"{{ root_password }}\"" > /root/.my.cnf
        when: not mysql_file.stat.exists

      - name: Generate random server_id
        shell: |
          echo $((0x$(echo `hostname` | md5sum | cut -f 1 -d " " | cut -c 1-3))) | cut -c 1-3
        register: mysql_server_id

      - name: Stop MySQL If running
        service:
          name: mysqld
          state: stopped

      - name: Template the my.cnf
        template:
          dest: "/etc/my.cnf"
          src: templates/my.cnf.js.j2
          owner: "root"
          group: "root"
          mode: '0644'

      - name: Define the primary host (e.g. first host in the group)
        set_fact:
          primary_node: "{{ groups['dbservers'][0] }}"

      - name: Define the secondary nodes
        set_fact:
          secondary_nodes: "{{ groups['dbservers'][1:] }}"

      - name: Start MySQL
        service:
          name: mysqld
          state: started

      - name: Template bootstrapping js for mysqlsh
        template:
          dest: "/tmp/bootstrap_cluster.js"
          src: templates/bootstrap_cluster.js.j2
          owner: "mysql"
          group: "mysql"
          mode: '0644'

      - name: Template InnoDB Cluster
        template:
          dest: "/tmp/add_instances.js"
          src: templates/add_instances.js.j2
          owner: "mysql"
          group: "mysql"
          mode: '0644'

      - name: Bootstrap the cluster (new deployment)
        command: mysqlsh -f /tmp/bootstrap_cluster.js

      - name: Add the other instances
        command: mysqlsh -f /tmp/add_instances.js
        when: inventory_hostname == primary_node

Congrats, now we have the cluster running:

Cluster status

{
    "clusterName": "InnoDBCluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "mysql-cluster-a:3306",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "mysql-cluster-a:3306": {
                "address": "mysql-cluster-a:3306",
                "memberRole": "PRIMARY",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": "applier_queue_applied",
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.37"
            },
            "mysql-cluster-b:3306": {
                "address": "mysql-cluster-b:3306",
                "memberRole": "SECONDARY",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": "applier_queue_applied",
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.37"
            },
            "mysql-cluster-c:3306": {
                "address": "mysql-cluster-c:3306",
                "memberRole": "SECONDARY",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": "applier_queue_applied",
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.37"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "mysql-cluster-a:3306"
}
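If you would rather not eyeball that JSON every time, here is a small check one could run over the `cluster.status()` output (plain JavaScript sketch; `isClusterHealthy` is a hypothetical helper, and the sample object is a trimmed-down slice of the status above):

```javascript
// Hypothetical helper: walks cluster.status() output and verifies the
// replica set reports OK and every member is ONLINE.
function isClusterHealthy(status) {
  const rs = status.defaultReplicaSet;
  if (rs.status !== 'OK') return false;
  return Object.values(rs.topology).every(m => m.status === 'ONLINE');
}

// Minimal slice of the sample status above:
const sample = {
  defaultReplicaSet: {
    status: 'OK',
    topology: {
      'mysql-cluster-a:3306': { memberRole: 'PRIMARY',   status: 'ONLINE' },
      'mysql-cluster-b:3306': { memberRole: 'SECONDARY', status: 'ONLINE' },
      'mysql-cluster-c:3306': { memberRole: 'SECONDARY', status: 'ONLINE' }
    }
  }
};

console.log(isClusterHealthy(sample)); // → true
```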

Because we want to look professional (only look; actually being professional is boring), here are the template files:

Templates

#my.cnf.js.j2
[mysqld]
server_id={{ mysql_server_id.stdout }}
binlog_checksum=NONE
gtid_mode=ON
enforce_gtid_consistency=ON
log_bin
log_slave_updates=ON
master_info_repository=TABLE
relay_log_info_repository=TABLE
skip-host-cache
skip-name-resolve
binlog_transaction_dependency_tracking=WRITESET

#bootstrap_cluster.js.j2
var cluster_user_password = "{{ cluster_user_password }}"
var clusterName = "{{ cluster_name }}"
var secondary_nodes = "{{ secondary_nodes }}"

var mysql = require('mysql')

try {
    var session = mysql.getClassicSession('root@localhost', '{{ root_password }}')
    session.runSql("create user '{{ cluster_user }}' identified by '{{ cluster_user_password }}'")
    session.runSql("GRANT ALL PRIVILEGES ON *.* TO {{ cluster_user }}@'%' WITH GRANT OPTION")
} catch(e){
    print('\n The innodb cluster admin user could not be created\n' + e.message + '\n')
}

#add_instances.js.j2
var cluster_user = "{{ cluster_user }}"
var cluster_user_password = "{{ cluster_user_password }}";
var clusterName = "{{ cluster_name }}";
// Render the Ansible list directly as a JS array instead of string-mangling it
var secondary_nodes = {{ secondary_nodes | to_json }};

var mysql = require('mysql')

try {
    print('Adding instances...\n');
    shell.connect('{{ cluster_user }}@{{ primary_node }}', cluster_user_password);
    var cluster = dba.createCluster(clusterName);
    for (let i = 0; i < secondary_nodes.length; i++) {
        cluster.addInstance(`${cluster_user}@${secondary_nodes[i]}`, { password: cluster_user_password, recoveryMethod: "clone" });
    }
} catch(e){
    print('\nThe InnoDB cluster could not be created.\n\nError: ' + e.message + '\n');
}
  • Last modified: 2024/06/21 14:19
  • by andonovj