Overview

Ansible and Oracle allow you to automate a lot of things around OCI. In this module we will address the automation of: creating a compute node, setting up MySQL, attaching block storage, and configuring MySQL replication with a clone-based refresh (resnap).

In later modules, we will make it even better with the introduction of roles. So let's get going. P.S. A disclaimer is in order here….I will not share my compartment or SSH keys…..obviously :D

Compute Node

Ansible has a variety of modules for the compute node, and for OCI in general, but the following playbook automates the creation of a compute node.

Of course, you need to create a VCN with the correct name prior to that.
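
If you don't have one yet, a minimal sketch with the oracle.oci collection looks something like this (the display name and CIDR are placeholders, and you will also need a subnet in it for instance_subnet_id):

create_vcn.yml (sketch)

---

- hosts: localhost
  collections:
    - oracle.oci
  tasks:
    - name: Create a VCN (values are placeholders)
      oci_network_vcn:
        compartment_id: "{{ instance_compartment }}"
        display_name: "my-vcn"
        cidr_block: "10.0.0.0/16"
      register: created_vcn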

Create a compute node

(venv) julien.andonov@julienandonovs-MacBook-Pro venv % ansible-playbook -i oci_inventory.py setup_computenode.yaml --extra-vars "hostname=jdbwiki"
[WARNING]: Invalid characters were found in group names but not replaced, use -vvvv to see details

PLAY [localhost] **********************************************************************************************************************************************************************************************************

TASK [Gathering Facts] ****************************************************************************************************************************************************************************************************
ok: [localhost]

TASK [set_fact] ***********************************************************************************************************************************************************************************************************

Actually, that command created the server on which this page is currently running :D

Playbook

You can see the playbook below:

setup_computenode.yaml

---

- hosts: localhost
  collections:
    - oracle.oci
  tasks:
    - block:
      - name: Get Running Instance ID
        oci_compute_instance_facts:
          compartment_id: "{{ instance_compartment }}"
          lifecycle_state: "RUNNING"
        register: instance_facts

      - name: Launch an instance
        oci_compute_instance:
          availability_domain: "{{ instance_ad }}"
          compartment_id: "{{ instance_compartment }}"
          name: "{{ hostname }}"
          image_id: "{{ instance_image }}"
          shape: "{{ instance_shape }}"
          create_vnic_details:
              assign_public_ip: True
              hostname_label: "{{ hostname }}"
              subnet_id: "{{ instance_subnet_id }}"
          metadata:
              ssh_authorized_keys: "{{ SSH_AUTHORIZED_KEYS }}"
          freeform_tags:
            tier: spares
          wait: True
        when: (instance_facts.instances | selectattr('display_name', 'equalto', hostname) | list | length) == 0
        register: result

      - name: Print Result
        debug:
          msg: "{{ result }}"

      - name: Get the facts when instance is already created
        oci_compute_instance_facts:
          compartment_id: "{{ instance_compartment }}"
          display_name: "{{ hostname }}"
          lifecycle_state: "RUNNING"
        register: instances

      - name: Print Instances
        debug:
          msg: "{{ instances }}"

      - set_fact:
          result: "{{ instances.instances[0] }}"

      - name: Print instance details
        debug:
          msg: "Launched a new instance {{ result.id }}"
      - set_fact:
          instance_id: "{{ result.id }}"
        when: instances is defined

      - name: Get the VNIC attachment details of instance
        oci_compute_vnic_attachment_facts:
          compartment_id: "{{ instance_compartment }}"
          instance_id: "{{ instance_id }}"
        register: result

      - name: Get details of the VNIC
        oci_network_vnic_facts:
          id: "{{ result.vnic_attachments[0].vnic_id }}"
        register: result
      - set_fact:
          instance_public_ip: "{{result.vnic.public_ip}}"

      - name: Print the public ip of the newly launched instance
        debug:
          msg: "Public IP of launched instance {{ instance_public_ip }}"

      - name: Wait (up to 10 minutes) for port 22 to become open
        wait_for:
          port: 22
          host: '{{ instance_public_ip }}'
          state: started
          delay: 10
          timeout: 600
        vars:
          ansible_connection: local

      - set_fact:
          ssh_command_with_generated_key: ssh -i "/Users/julien.andonov/.oci/id_oci" -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" opc@{{ instance_public_ip }} uname -a
      
      - set_fact:
          ssh_command_with_custom_key: ssh -i "/Users/julien.andonov/.oci/id_oci" -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" opc@{{ instance_public_ip }} uname -a

      - name: Attempt a ssh connection to the newly launched instance
        command: "{{ ssh_command_with_custom_key if 'SAMPLE_PUBLIC_SSH_KEY' in ansible_env else ssh_command_with_generated_key }}"
        retries: 3
        delay: 5
        register: result
        until: result.rc == 0

      - name: Print SSH response from launched instance
        debug:
          msg: "SSH response from instance -> {{ result.stdout_lines }}"

      - name: Add host
        add_host:
          name: "{{ hostname }}"
          ansible_ssh_host: "{{ instance_public_ip }}"

So the playbook won't create a compute node that already exists, and it will also verify that the server has been created :)
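
By the way, all the instance_* variables and SSH_AUTHORIZED_KEYS used above live in a variables.yml that the later playbooks load via vars_files. A minimal sketch of it, with every value replaced by a placeholder:

variables.yml (sketch)

instance_compartment: "ocid1.compartment.oc1..xxxx"   # placeholder OCID
instance_ad: "XXXX:EU-FRANKFURT-1-AD-1"               # placeholder availability domain
instance_image: "ocid1.image.oc1..xxxx"               # placeholder image OCID
instance_shape: "VM.Standard2.1"                      # pick your own shape
instance_subnet_id: "ocid1.subnet.oc1..xxxx"          # placeholder subnet OCID
SSH_AUTHORIZED_KEYS: "ssh-rsa AAAA... opc@somewhere"  # your public key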

MySQL Setup

Now, since OCI stands for Oracle Cloud Infrastructure, well… they run Oracle Linux and AS SUCH, the default is MySQL 8, so… the first task in my playbook is to DELETE MySQL 8 :D. Then I install the new packages and so on… :)

MySQL Deployment

ansible-playbook -i oci_inventory.py setup_mysql.yml --extra-vars "hostname=db-stg-001" --extra-vars "root_password=UlraSecurePassword123#" --extra-vars "version=5.7"

For the people who are wondering what “oci_inventory.py” is: it is a script provided by Oracle which builds a dynamic inventory from your OCI tenancy, so I don't have to keep track of whether a server still exists or whether I killed it… brutally, I might add.
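
If you are curious what the dynamic inventory resolves to, you can point the standard inventory tooling at it (this assumes your ~/.oci/config credentials are already in place):

ansible-inventory -i oci_inventory.py --graph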

Playbook

Again, you can see the playbook below:

setup_mysql.yml

---

- hosts: localhost
  collections:
    - oracle.oci
  vars_files:
    - /Users/julien.andonov/Work/POCE-playbooks/oci-ansible-collection/samples/compute/venv/variables.yml

  tasks:
    - block:
      - name: Get instance details
        oci_compute_instance_facts:
          compartment_id: "{{ instance_compartment }}"
          lifecycle_state: "RUNNING"
          display_name: "{{ hostname }}"
        register: instances_details

      - name: Get facts
        debug:
          msg: "{{ instances_details }}"

      - name: Save instance info
        set_fact:
          instance: '{{ instances_details["instances"] }}'
          instance_ips: '{{ instances_details["instances"] | map(attribute="primary_public_ip") | list }}'

      - name: Add instance(s) to in-memory inventory
        add_host:
          name: '{{ item }}'
        loop: '{{ instance_ips }}'


- name: Configure MySQL
  vars:
    ansible_user: opc
    ansible_become: yes
    ansible_become_method: sudo

  hosts: '{{ hostvars["localhost"]["instance_ips"]}}'
  tasks:
  - block:
    - name: Remove repository (and clean up left-over metadata)
      file:
        path: /etc/yum.repos.d/mysql-ol7.repo
        state: absent 

    - name: Remove MySQL 8.0 packages
      yum:
        name:
          - mysql-community-client-plugins-8*
          - mysql-community-common-8*
          - mysql-community-libs-8*
          - mysql-community-libs-compat-8*
        state: absent

    - name: yum-clean-metadata
      command: yum clean metadata
      args:
        warn: no

    - name: Add repository
      yum_repository:
        name: mysql-ol7-5.7
        baseurl: https://yum$ociregion.$ocidomain/repo/OracleLinux/OL7/MySQL57_community/$basearch/
        description: MySQL 5.7 Repo
        enabled: yes
        gpgcheck: yes
        gpgkey: file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle

    - name: Install MySQL Packages
      yum:
        name:
          - mysql-community-server
          - mysql-community-client
          - MySQL-python
        state: present

    - name: Stop MySQL
      service: name=mysqld state=stopped enabled=false

    - name: Change ownership of MySQL Directory
      ansible.builtin.file:
        path: /var/lib/mysql/
        owner: mysql
        group: mysql
        mode: 0755

    - name: Remove the old MySQL Data directory (re-created below)
      ansible.builtin.file:
        path: /var/lib/mysql/mysql5/
        state: absent

    - name: Create MySQL Data directory if it doesn't exist
      ansible.builtin.file:
        path: /var/lib/mysql/mysql5/
        owner: mysql
        group: mysql
        state: directory
        mode: 0755

    - name: Create MySQL Log directory if it doesn't exist
      ansible.builtin.file:
        path: /var/log/mysql/
        state: directory
        owner: mysql
        group: mysql
        mode: '0755'

    - name: Create MySQL Log file
      ansible.builtin.file:
        path: /var/log/mysql/slow.log
        state: touch
        owner: mysql
        group: mysql
        mode: '0640'


    - name: Stop MySQL (and enable it at boot)
      service: name=mysqld state=stopped enabled=true

    - name: Get the last three characters of the hostname (used as server-id)
      shell: |
        hostname | tail -c 4
      register: server_id

    - name: Modify the config file
      shell: |
        echo "
        [mysqld]
        back_log = 128
        binlog_cache_size = 128K
        binlog_row_image = minimal
        character_set_server = utf8
        collation_server = utf8_bin
        datadir = /var/lib/mysql/mysql5/
        expire_logs_days = 10
        innodb_adaptive_hash_index = 0
        innodb_buffer_pool_instances = 4
        innodb_buffer_pool_size = 9216M
        innodb_data_file_path = ibdata1:10M:autoextend
        innodb_data_home_dir = /var/lib/mysql/mysql5/
        innodb_flush_log_at_trx_commit = 2
        innodb_flush_method = O_DIRECT
        innodb_io_capacity = 300
        innodb_lock_wait_timeout = 120
        innodb_log_file_size = 1G
        innodb_log_files_in_group = 2
        innodb_log_group_home_dir = /var/lib/mysql/mysql5/
        innodb_numa_interleave = 1
        innodb_purge_batch_size = 1000
        innodb_purge_threads = 8
        innodb_read_io_threads = 12
        innodb_tmpdir = /var/lib/mysql/mysql5/
        innodb_write_io_threads = 12
        join_buffer_size = 128K
        key_buffer_size = 32M
        local_infile = 0
        log_bin = /var/lib/mysql/mysql5/{{ hostname }}-bin.log
        log_error = /var/log/mysql/mysql-error.log
        log_slave_updates = 1
        log_slow_admin_statements = 1
        lower_case_table_names = 1
        max_allowed_packet = 32M
        max_connect_errors = 50000
        max_connections = 500
        max_heap_table_size = 64M
        max_relay_log_size = 1G
        open_files_limit = 32768
        read_rnd_buffer_size = 2M
        relay_log = mysql-relay-bin
        relay_log_index = mysql-relay-bin.index
        relay_log_info_repository = TABLE
        relay_log_recovery = ON
        replicate_ignore_db = mysql
        replicate_ignore_table = mysql.user
        replicate_ignore_table = mysql.db
        replicate_ignore_table = mysql.host
        replicate_ignore_table = mysql.tables_priv
        replicate_ignore_table = mysql.columns_priv
        replicate_ignore_table = mysql.procs_priv
        replicate_ignore_table = mysql.proxies_priv
        report_host = {{ hostname }}
        server-id = {{ server_id.stdout }}
        skip_name_resolve = 1
        skip_slave_start = 1
        slave_compressed_protocol = 1
        slave_parallel_type = LOGICAL_CLOCK
        slave_parallel_workers = 2
        slave_preserve_commit_order = 1
        slow_query_log = 1
        slow_query_log_file = /var/log/mysql/slow.log
        socket = /var/lib/mysql/mysql5/mysql.sock
        sort_buffer_size = 32M
        sql_mode = PIPES_AS_CONCAT,NO_AUTO_VALUE_ON_ZERO,TRADITIONAL
        sync_binlog = 0
        sysdate_is_now = 1
        table_open_cache = 8192
        thread_cache_size = 128
        tmp_table_size = 64M
        tmpdir = /tmp
        user = mysql" > /etc/my.cnf

    - name: Start MySQL
      service: name=mysqld state=started enabled=true

    - name: Check if a file exists
      stat:
        path: /root/.my.cnf
      register: mysql_file

    - name: Obtain & change the temporary password
      shell: |
          temporal_password=`awk '/A temporary password is generated for/ {a=$0} END{ print a }' /var/log/mysql/mysql-error.log | awk '{print $(NF)}'`
          mysql -uroot -p${temporal_password} --connect-expired-password -S /var/lib/mysql/mysql5/mysql.sock -e "ALTER USER 'root'@'localhost' IDENTIFIED BY '{{ root_password }}'; FLUSH PRIVILEGES;"
          echo "[client]
          user=root
          password=\"{{ root_password }}\"" > /root/.my.cnf
      when: not mysql_file.stat.exists

    - name: Create mysql replication user 
      community.mysql.mysql_user:
        login_user: root
        login_password: "{{ root_password }}"
        login_unix_socket: /var/lib/mysql/mysql5/mysql.sock
        name: replication
        host: '%'
        priv: "*.*:REPLICATION CLIENT, REPLICATION SLAVE"
        password: "{{ root_password }}"

Now, of course, I will change the configuration with a template and so on, but that is for a later module.
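
As a teaser for that later module, the big shell echo above would collapse into a single template task. A minimal sketch, assuming a (hypothetical) templates/my.cnf.j2 carrying the same settings:

- name: Deploy my.cnf from a template (sketch; templates/my.cnf.j2 is hypothetical)
  ansible.builtin.template:
    src: templates/my.cnf.j2
    dest: /etc/my.cnf
    owner: root
    group: root
    mode: '0644'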

MySQL Replication & Refresh

Now, in order to configure replication and the refresh, we need to implement the snapshot of the block device. But wait… we didn't configure a block device yet, so let's get that going.

Block device

To configure a block device you can use the following command:

Block device config

ansible-playbook -i oci_inventory.py setup_storage_computenode.yml --extra-vars "hostname=db-stg-001"

You can see the playbook itself below:

setup_storage_computenode.yml

---

- hosts: localhost
  collections:
    - oracle.oci
  tasks:
    - block:   
      - name: Get instance details
        oci_compute_instance_facts:
          compartment_id: "{{ instance_compartment }}"
          lifecycle_state: "RUNNING"
          display_name: "{{ hostname }}"
        register: instances_details

      - name: Get facts
        debug:
          msg: "{{ instances_details }}"

      - name: Save instance info
        set_fact:
          instance: '{{ instances_details["instances"] }}'
          instance_ips: '{{ instances_details["instances"] | map(attribute="primary_public_ip") | list }}'
      
      - name: Show instance names
        debug:
          msg: '{{instance}}'

      - name: Show instance's IPs
        debug:
          msg: '{{instance_ips}}'

      - name: Add instance(s) to in-memory inventory
        add_host:
          name: '{{ item }}'
        loop: '{{ instance_ips }}'


      - name: Get hostvars
        debug:
          msg: '{{ hostvars["localhost"]["instance_ips"]}}'

      - name: Create a block device
        oci_blockstorage_volume:
          compartment_id: "{{ instance_compartment }}"
          availability_domain: "{{ instances_details.instances[0].availability_domain }}"
          display_name: "{{ hostname }}-bv1"
          size_in_gbs: 20
        register: created_volume

      - name: Get the block device data
        oci_blockstorage_volume_facts:
          display_name: "{{ hostname }}-bv1"
          availability_domain: "{{ instance_ad }}"
          compartment_id: "{{ instance_compartment }}"
          lifecycle_state: "AVAILABLE"
        register: compute_node_block_volume

      - name: Print block devices 
        debug:
          msg: "{{ compute_node_block_volume }}"
        
      - name: Attach the block volume
        oci_compute_volume_attachment:
          compartment_id: "{{ instance_compartment }}"
          instance_id: "{{ instances_details.instances[0].id }}"
          type: service_determined
          volume_id: "{{ compute_node_block_volume.volumes[0].id }}"
        register: attached_block_device

      - name: Save attachment commands
        set_fact: 
          iscsi_attach_commands: '{{ attached_block_device.volume_attachment.iscsi_attach_commands }}'

- import_playbook: execute_shell_commands_root.yml
  vars:
    instance_ip: '{{ hostvars["localhost"]["instance_ips"] }}'
    bash_commands: '{{ hostvars["localhost"]["iscsi_attach_commands"] }}'

That playbook will create a block device, attach it to the instance in OCI, and then attach it on the host itself using the following playbook:

execute_shell_commands_root.yml

---

- name: Attach the device
  vars:
    ansible_user: opc
    ansible_become: yes
    ansible_become_method: sudo

  hosts: '{{ instance_ip }}'
  tasks:
    - block:
      - name: Execute commands
        ansible.builtin.command: '{{ item }}'
        loop: '{{ bash_commands }}'
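
For reference, the iscsi_attach_commands that OCI returns for an attachment are the standard iscsiadm registration/login sequence, roughly of this shape (illustrative only; the IQN and portal IP come from the attachment):

sudo iscsiadm -m node -o new -T iqn.2015-12.com.oracleiaas:<volume-uuid> -p 169.254.2.x:3260
sudo iscsiadm -m node -o update -T iqn.2015-12.com.oracleiaas:<volume-uuid> -n node.startup -v automatic
sudo iscsiadm -m node -T iqn.2015-12.com.oracleiaas:<volume-uuid> -p 169.254.2.x:3260 -l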

With all that in place, we can finally configure the refresh (resnap) and the replication:

Resnap the primary block device

setup_resnap

---

- hosts: localhost
  collections:
    - oracle.oci
  vars_files:
    - /Users/julien.andonov/Work/POCE-playbooks/oci-ansible-collection/samples/compute/venv/variables.yml

  tasks:
    - block:
      - name: Get Replica instance details
        oci_compute_instance_facts:
          compartment_id: "{{ instance_compartment }}"
          lifecycle_state: "RUNNING"
          display_name: "{{ REPLICA }}"
        register: replica_details

      - name: Save Replica info
        set_fact:
          instance: '{{ replica_details["instances"] }}'
          replica_ip: '{{ replica_details["instances"] | map(attribute="primary_public_ip") | list }}'

      - name: Add replica instance(s) to in-memory inventory
        add_host:
          name: '{{ item }}'
        loop: '{{ replica_ip }}'

      - name: Get attached devices on Replica
        oci_compute_volume_attachment_facts:
          compartment_id: "{{ instance_compartment }}"
          instance_id: "{{ replica_details.instances[0].id }}"
        register: replica_volume_attachment_facts

- import_playbook: detach_iscsi.yml
  vars:
    instance_ip: '{{ hostvars["localhost"]["replica_ip"] }}'

- name: Create iSCSI attachment
  hosts: localhost
  collections:
    - oracle.oci
  vars_files:
    - /Users/julien.andonov/Work/POCE-playbooks/oci-ansible-collection/samples/compute/venv/variables.yml
  tasks:
    - block:
      - name: Get Primary instance details
        oci_compute_instance_facts:
          compartment_id: "{{ instance_compartment }}"
          lifecycle_state: "RUNNING"
          display_name: "{{ PRIMARY }}"
        register: primary_details

      - name: Save primary info
        set_fact:
          instance: '{{ primary_details["instances"] }}'
          primary_ip: '{{ primary_details["instances"] | map(attribute="primary_public_ip") | list }}'
          primary_prv_ip: '{{ primary_details["instances"] | map(attribute="primary_private_ip") | list }}'

      - name: Add primary instance(s) to in-memory inventory
        add_host:
          name: '{{ item }}'
        loop: '{{ primary_ip }}'

      - name: Get attached devices on Primary
        oci_compute_volume_attachment_facts:
          compartment_id: "{{ instance_compartment }}"
          instance_id: "{{ primary_details.instances[0].id }}"
        register: primary_volume_attachment_facts

      - name: Get all devices on Replica
        oci_compute_volume_attachment_facts:
          compartment_id: "{{ instance_compartment }}"
          instance_id: "{{ replica_details.instances[0].id }}"
        register: replica_volume_attachment_facts

      - name: Set fact, attached device on the replica
        set_fact:
          replica_attached_blockdevice_id: '{{ replica_volume_attachment_facts.volume_attachments | selectattr("lifecycle_state", "equalto", "ATTACHED") | map(attribute="id") | list }}'
          replica_attached_volume_id: '{{ replica_volume_attachment_facts.volume_attachments | selectattr("lifecycle_state", "equalto", "ATTACHED") | map(attribute="volume_id") | list }}'

      - name: Detach the block volume attachment from the replica (clone)
        oci_compute_volume_attachment:
          volume_attachment_id: '{{ replica_attached_blockdevice_id[0] }}'
          state: absent
        when: replica_attached_blockdevice_id|length != 0

      - name: Delete the block volume
        oci_blockstorage_volume:
          volume_id: '{{ replica_attached_volume_id[0] }}'
          state: absent
        when: replica_attached_volume_id|length != 0

      - name: Create a clone from the primary's block volume in the same availability domain
        oci_blockstorage_volume:
          name: '{{ REPLICA }}-bv-clone'
          source_details:
            id: '{{ primary_volume_attachment_facts.volume_attachments[0].volume_id }}'
            type: 'volume'
          availability_domain: '{{ instance_ad }}'
          compartment_id: '{{ instance_compartment }}'
          size_in_gbs: 50
        register: created_clone_block_volume

      - name: Attach the clone to the replica
        oci_compute_volume_attachment:
          compartment_id: "{{ instance_compartment }}"
          instance_id: "{{ replica_details.instances[0].id }}"
          type: service_determined
          volume_id: "{{ created_clone_block_volume.volume.id }}"
        register: attached_block_device

      - name: Save attachment commands
        set_fact: 
          iscsi_attach_commands: '{{ attached_block_device.volume_attachment.iscsi_attach_commands }}'

- import_playbook: execute_shell_commands_root.yml
  vars:
    instance_ip: '{{ hostvars["localhost"]["replica_ip"] }}'
    bash_commands: '{{ hostvars["localhost"]["iscsi_attach_commands"] }}'

- import_playbook: setup_replication.yml
  vars:
    instance_ip: '{{ hostvars["localhost"]["replica_ip"] }}'
    primary_host: '{{ PRIMARY }}'
    primary_prv_ip: '{{ hostvars["localhost"]["primary_prv_ip"] }}'
    primary_password: '{{ root_password }}'

That playbook will use our final playbook: “setup_replication”, which you can see below:

Setup replication

setup_replication.yml

---

- name: Setup replication on the replica
  vars:
    ansible_user: opc
    ansible_become: yes
    ansible_become_method: sudo
  vars_files:
    - /Users/julien.andonov/Work/POCE-playbooks/oci-ansible-collection/samples/compute/venv/variables.yml
    
  hosts: '{{ instance_ip }}'
  tasks:
    - name: Stop Firewall
      service: name=firewalld state=stopped enabled=false

    - name: Get file coordinate
      shell: |
        cat /var/lib/mysql/mysql5/{{ PRIMARY }}-bin.index | tail -1
      register: replication_file

    - name: Get position coordinate
      shell: |
        wc {{ replication_file.stdout }} | awk -F " " '{print $3}'
      register: replication_position

    - name: Change the UUID of MySQL
      shell: |
        newUUID=`uuidgen`
        echo "
        [auto]
        server-uuid=${newUUID}" > /var/lib/mysql/mysql5/auto.cnf

    - name: Start MySQL
      service: name=mysqld state=started enabled=true

    - name: Change primary on Replica
      community.mysql.mysql_replication:
        mode: changeprimary
        primary_host: '{{ primary_prv_ip[0] }}'
        primary_log_file: '{{ replication_file.stdout }}'
        primary_log_pos: '{{ replication_position.stdout }}'
        login_unix_socket: /var/lib/mysql/mysql5/mysql.sock
        login_user: root
        login_password: "{{ root_password }}"
        primary_user: 'replication'
        primary_password: "{{ root_password }}"

    - name: Start replica
      community.mysql.mysql_replication:
        mode: startreplica
        login_unix_socket: /var/lib/mysql/mysql5/mysql.sock
        login_user: root
        login_password: "{{ root_password }}"

With this, we were able to set up replication using OCI and Ansible. The resnap can be done as many times as needed: it will always clone the primary's block device, attach it to the replica, and then re-establish replication.
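
For completeness, kicking off a resnap follows the same pattern as the other playbooks; the file name and extra-vars below are my assumption, based on the variables the playbook references (PRIMARY, REPLICA, root_password):

ansible-playbook -i oci_inventory.py setup_resnap.yml --extra-vars "PRIMARY=db-stg-001" --extra-vars "REPLICA=db-stg-002" --extra-vars "root_password=UlraSecurePassword123#"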

There is one more playbook which we need to add.

detach_iscsi

---

- name: Detach the device
  vars:
    ansible_user: opc
    ansible_become: yes
    ansible_become_method: sudo

  hosts: '{{ instance_ip }}'
  tasks:
    - name: Find the IQN
      shell: |
        iscsiadm --mode session -P 3 | grep Target: | head -1 | awk -F " " '{print $2}'
      register: iqn

    - name: Stop MySQL
      service: name=mysqld state=stopped enabled=false
      when: iqn.stdout != ""

    - name: Find possible processes holding the file system
      shell: |
        lsof +D /var/lib/mysql |  awk -F " " '{print $2}' | uniq | grep -v "PID"
      register: locking_processes
      ignore_errors: yes

    - name: Print locking process
      debug:
        msg: '{{ locking_processes }}'
      when: locking_processes.stdout_lines|length != 0

    - name: Kill the processes holding the file system
      shell: |
        kill -9 '{{ item }}'
      with_items: '{{ locking_processes.stdout_lines }}'
      when: locking_processes.stdout_lines|length != 0

    - name: Print IQN
      debug:
        msg: '{{iqn.stdout}}'

    - name: Unmount File system
      mount:
        path: /var/lib/mysql
        state: unmounted

    - name: Detach the iSCSI
      community.general.open_iscsi:
        login: no
        target: '{{iqn.stdout}}'
      when: iqn.stdout != ""

That playbook will stop MySQL and detach the iSCSI device from the compute node, so we can safely remove it from OCI as well.

MySQL Clusterset Automation

This Ansible project consists of three roles:

  1. MySQL Instance
  2. MySQL InnoDB Cluster
  3. MySQL Router

Together, these roles create an architecture with one InnoDB ClusterSet and a replica set attached to it. You can see the full code here:

https://github.com/JulienAndonov/MySQL_Cluster_Ansible
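
A top-level playbook wiring the three roles together might look like the sketch below; the role names are my guesses based on the list above, so check the repository for the actual wiring:

site.yml (sketch)

---

- hosts: mysql_cluster
  become: yes
  roles:
    - mysql_instance        # install and configure the MySQL server
    - mysql_innodb_cluster  # create the InnoDB Cluster / ClusterSet
    - mysql_router          # bootstrap MySQL Router against the cluster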