ansible_adanced_oci

Ansible and Oracle allow you to automate a lot of things around OCI (Oracle Cloud Infrastructure). In this module we will address the automation of:

  • Compute Node Creation
  • MySQL Creation
  • MySQL Replication Setup

In later modules, we will make it even better with the introduction of roles. So let's get going. P.S. A disclaimer is in order here… I will not share my compartment or SSH keys… obviously :D

Ansible has a variety of modules for compute nodes, and for anything else in general, but the following playbook automates the creation of a compute node.

Of course, you need to create a VCN with the correct name prior to that.
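
If you don't have the VCN yet, the same oracle.oci collection can create it. A minimal sketch, assuming placeholder values for the display name, CIDR block and DNS label (these are not my real settings), could look like this:

- name: Create a VCN (sketch, placeholder values)
  oracle.oci.oci_network_vcn:
    compartment_id: "{{ instance_compartment }}"
    display_name: "my-vcn"
    cidr_block: "10.0.0.0/16"
    dns_label: "myvcn"
  register: vcn_result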

Create a compute node

(venv) julien.andonov@julienandonovs-MacBook-Pro venv % ansible-playbook -i oci_inventory.py setup_computenode.yaml --private-key id_oci --extra-vars "hostname=jdbwiki"
[WARNING]: Invalid characters were found in group names but not replaced, use -vvvv to see details

PLAY [localhost] **********************************************************************************************************************************************************************************************************

TASK [Gathering Facts] ****************************************************************************************************************************************************************************************************
ok: [localhost]

TASK [set_fact] ***********************************************************************************************************************************************************************************************************

Actually, that command created the server on which this page is currently running :D

You can see the playbook below:

setup_computenode.yml

---

- hosts: localhost
  collections:
    - oracle.oci
  vars_files:
    - /Users/julien.andonov/Work/POCE-playbooks/oci-ansible-collection/samples/compute/venv/variables.yml
  tasks:
    - block:
      - name: Get Running Instance ID
        oci_compute_instance_facts:
          compartment_id: "{{ instance_compartment }}"
          lifecycle_state: "RUNNING"
        register: instance_facts

      - name: Launch an instance
        oci_compute_instance:
          availability_domain: "{{ instance_ad }}"
          compartment_id: "{{ instance_compartment }}"
          name: "{{ hostname }}"
          image_id: "{{ instance_image }}"
          shape: "{{ instance_shape }}"
          create_vnic_details:
              assign_public_ip: True
              hostname_label: "{{ hostname }}"
              subnet_id: "{{ instance_subnet_id }}"
          metadata:
              ssh_authorized_keys: "{{ SSH_AUTHORIZED_KEYS }}"
          freeform_tags: "{'tier': 'spares'}"
          wait: True
        when: instance_facts.instances | selectattr('display_name', 'equalto', hostname) | list | length == 0
        register: result

      - name: Print Result
        debug:
          msg: "{{ result }}"

      - name: Get the facts when instance is already created
        oci_compute_instance_facts:
          compartment_id: "{{ instance_compartment }}"
          display_name: "{{ hostname }}"
          lifecycle_state: "RUNNING"
        register: instances

      - name: Print Instances
        debug:
          msg: "{{ instances }}"

      - set_fact:
          result: "{{ instances.instances[0] }}"

      - name: Print instance details
        debug:
          msg: "Launched a new instance {{ result.id }}"
      - set_fact:
          instance_id: "{{ result.id }}"
        when: instances is defined

      - name: Get the VNIC attachment details of instance
        oci_compute_vnic_attachment_facts:
          compartment_id: "{{ instance_compartment }}"
          instance_id: "{{ instance_id }}"
        register: result

      - name: Get details of the VNIC
        oci_network_vnic_facts:
          id: "{{ result.vnic_attachments[0].vnic_id }}"
        register: result
      - set_fact:
          instance_public_ip: "{{result.vnic.public_ip}}"

      - name: Print the public ip of the newly launched instance
        debug:
          msg: "Public IP of launched instance {{ instance_public_ip }}"

      - name: Wait (up to 10 minutes) for port 22 to become open
        wait_for:
          port: 22
          host: '{{ instance_public_ip }}'
          state: started
          delay: 10
          timeout: 600
        vars:
          ansible_connection: local

      - set_fact:
          ssh_command_with_generated_key: ssh -i "/Users/julien.andonov/.oci/id_oci" -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" opc@{{ instance_public_ip }} uname -a
      
      - set_fact:
          ssh_command_with_custom_key: ssh -i "/Users/julien.andonov/.oci/id_oci" -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" opc@{{ instance_public_ip }} uname -a

      - name: Attempt a ssh connection to the newly launched instance
        command: "{{ ssh_command_with_custom_key if 'SAMPLE_PUBLIC_SSH_KEY' in ansible_env else ssh_command_with_generated_key }}"
        retries: 3
        delay: 5
        register: result
        until: result.rc == 0

      - name: Print SSH response from launched instance
        debug:
          msg: "SSH response from instance -> {{ result.stdout_lines }}"

      - name: Add host
        add_host:
          name: "{{ hostname }}"
          ansible_ssh_host: "{{ instance_public_ip }}"

So the playbook won't create a compute node which already exists, and it will also verify that the server has been created :)
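
All the OCIDs, the shape and the SSH key come from variables.yml, which is not shown here because it holds my compartment details. A sketch of that file with placeholder values (every OCID below is fake) would look roughly like this:

instance_compartment: "ocid1.compartment.oc1..aaaaaaaaexample"
instance_ad: "XXXX:EU-FRANKFURT-1-AD-1"
instance_image: "ocid1.image.oc1.eu-frankfurt-1.aaaaaaaaexample"
instance_shape: "VM.Standard.E2.1"
instance_subnet_id: "ocid1.subnet.oc1.eu-frankfurt-1.aaaaaaaaexample"
SSH_AUTHORIZED_KEYS: "ssh-rsa AAAA...example opc@workstation"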

Now, since OCI stands for Oracle Cloud Infrastructure, well… they ship Oracle Linux and AS SUCH, by default it comes with MySQL 8, so… my first task in the playbook is to DELETE the MySQL 8 :D. Then I install the new packages and so on and so forth… :)

MySQL Deployment

ansible-playbook -i oci_inventory.py setup_mysql.yml --private-key id_oci --extra-vars "hostname=db-stg-001" --extra-vars "root_password=UlraSecurePassword123#" --extra-vars "version=5.7"

For the people who are wondering what “oci_inventory.py” is: it is a script from Oracle which builds a dynamic inventory from your OCI tenancy, so I don't have to keep track of whether a server still exists or I already killed it… brutally, I might add.
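
Like any Ansible dynamic inventory script, you can also query it directly, which is handy for checking what it actually sees (a quick sketch; the exact groups depend on your tenancy):

python oci_inventory.py --list
ansible-inventory -i oci_inventory.py --graph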

Again, you can see the playbook below:

setup_mysql.yml

---

- hosts: localhost
  collections:
    - oracle.oci
  vars_files:
    - /Users/julien.andonov/Work/POCE-playbooks/oci-ansible-collection/samples/compute/venv/variables.yml

  tasks:
    - block:
      - name: Get instance details
        oci_compute_instance_facts:
          compartment_id: "{{ instance_compartment }}"
          lifecycle_state: "RUNNING"
          display_name: "{{ hostname }}"
        register: instances_details

      - name: Get facts
        debug:
          msg: "{{ instances_details }}"

      - name: Save instance info
        set_fact:
          instance: '{{ instances_details["instances"] }}'
          instance_ips: '{{instances_details["instances"] | map(attribute="primary_public_ip")  }}'

      - name: Add instance(s) to in-memory inventory
        add_host:
          name: '{{ item }}'
        loop: '{{ instance_ips }}'


- name: Configure MySQL
  vars:
    ansible_user: opc
    ansible_become: yes
    ansible_become_method: sudo

  hosts: '{{ hostvars["localhost"]["instance_ips"]}}'
  tasks:
  - block:
    - name: Remove repository (and clean up left-over metadata)
      file:
        path: /etc/yum.repos.d/mysql-ol7.repo
        state: absent 

    - name: Remove MySQL 8.0 packages
      yum:
        name:
          - mysql-community-client-plugins-8*
          - mysql-community-common-8*
          - mysql-community-libs-8*
          - mysql-community-libs-compat-8*
        state: absent

    - name: yum-clean-metadata
      command: yum clean metadata
      args:
        warn: no

    - name: Add repository
      yum_repository:
        name: mysql-ol7-5.7
        baseurl: https://yum$ociregion.$ocidomain/repo/OracleLinux/OL7/MySQL57_community/$basearch/
        description: MySQL 5.7 Repo
        enabled: yes
        gpgcheck: yes
        gpgkey: file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle

    - name: Install MySQL Packages
      yum:
        name:
          - mysql-community-server
          - mysql-community-client
          - MySQL-python
        state: present

    - name: Stop MySQL
      service: name=mysqld state=stopped enabled=false

    - name: Change ownership of MySQL Directory
      ansible.builtin.file:
        path: /var/lib/mysql/
        owner: mysql
        group: mysql
        mode: 0755

    - name: Re-create the Data Directory
      ansible.builtin.file:
        path: /var/lib/mysql/mysql5/
        state: absent

    - name: Create MySQL Data directory if it doesn't exist
      ansible.builtin.file:
        path: /var/lib/mysql/mysql5/
        owner: mysql
        group: mysql
        state: directory
        mode: 0755

    - name: Create MySQL Log directory if it doesn't exist
      ansible.builtin.file:
        path: /var/log/mysql/
        state: directory
        owner: mysql
        group: mysql
        mode: '0755'

    - name: Create MySQL Log file
      ansible.builtin.file:
        path: /var/log/mysql/slow.log
        state: touch
        owner: mysql
        group: mysql
        mode: '0640'


    - name: Stop MySQL
      service: name=mysqld state=stopped enabled=true

    - name: Get last three characters of a string
      shell: |
        hostname | tail -c 4
      register: server_id

    - name: Modify the config file
      shell: |
        echo "
        [mysqld]
        back_log = 128
        binlog_cache_size = 128K
        binlog_row_image = minimal
        character_set_server = utf8
        collation_server = utf8_bin
        datadir = /var/lib/mysql/mysql5/
        expire_logs_days = 10
        innodb_adaptive_hash_index = 0
        innodb_buffer_pool_instances = 4
        innodb_buffer_pool_size = 9216M
        innodb_data_file_path = ibdata1:10M:autoextend
        innodb_data_home_dir = /var/lib/mysql/mysql5/
        innodb_flush_log_at_trx_commit = 2
        innodb_flush_method = O_DIRECT
        innodb_io_capacity = 300
        innodb_lock_wait_timeout = 120
        innodb_log_file_size = 1G
        innodb_log_files_in_group = 2
        innodb_log_group_home_dir = /var/lib/mysql/mysql5/
        innodb_numa_interleave = 1
        innodb_purge_batch_size = 1000
        innodb_purge_threads = 8
        innodb_read_io_threads = 12
        innodb_tmpdir = /var/lib/mysql/mysql5/
        innodb_write_io_threads = 12
        join_buffer_size = 128K
        key_buffer_size = 32M
        local_infile = 0
        log_bin = /var/lib/mysql/mysql5/{{ hostname }}-bin.log
        log_error = /var/log/mysql/mysql-error.log
        log_slave_updates = 1
        log_slow_admin_statements = 1
        lower_case_table_names = 1
        max_allowed_packet = 32M
        max_connect_errors = 50000
        max_connections = 500
        max_heap_table_size = 64M
        max_relay_log_size = 1G
        open_files_limit = 32768
        read_rnd_buffer_size = 2M
        relay_log = mysql-relay-bin
        relay_log_index = mysql-relay-bin.index
        relay_log_info_repository = TABLE
        relay_log_recovery = ON
        replicate_ignore_db = mysql
        replicate_ignore_table = mysql.user
        replicate_ignore_table = mysql.db
        replicate_ignore_table = mysql.host
        replicate_ignore_table = mysql.tables_priv
        replicate_ignore_table = mysql.columns_priv
        replicate_ignore_table = mysql.procs_priv
        replicate_ignore_table = mysql.proxies_priv
        report_host = {{ hostname }}
        server-id = {{ server_id.stdout }}
        skip_name_resolve = 1
        skip_slave_start = 1
        slave_compressed_protocol = 1
        slave_parallel_type = LOGICAL_CLOCK
        slave_parallel_workers = 2
        slave_preserve_commit_order = 1
        slow_query_log = 1
        slow_query_log_file = /var/log/mysql/slow.log
        socket = /var/lib/mysql/mysql5/mysql.sock
        sort_buffer_size = 32M
        sql_mode = PIPES_AS_CONCAT,NO_AUTO_VALUE_ON_ZERO,TRADITIONAL
        sync_binlog = 0
        sysdate_is_now = 1
        table_open_cache = 8192
        thread_cache_size = 128
        tmp_table_size = 64M
        tmpdir = /tmp
        user = mysql" > /etc/my.cnf

    - name: Start MySQL
      service: name=mysqld state=started enabled=true

    - name: Check if a file exist
      stat:
        path: /root/.my.cnf
      register: mysql_file

    - name: Obtain & Change temporal password 
      shell: |
          temporal_password=`awk '/A temporary password is generated for/ {a=$0} END{ print a }' /var/log/mysql/mysql-error.log | awk '{print $(NF)}'`
          mysql -uroot -p${temporal_password} --connect-expired-password -S /var/lib/mysql/mysql5/mysql.sock -e "ALTER USER 'root'@'localhost' IDENTIFIED BY '{{ root_password }}';flush privileges;
          echo "[client]
          user=root
          password=\"{{ root_password }}\" > /root/.my.cnf
      when: not mysql_file.stat.exists

    - name: Create mysql replication user 
      community.mysql.mysql_user:
        login_user: root
        login_password: "{{ root_password }}"
        login_unix_socket: /var/lib/mysql/mysql5/mysql.sock
        name: replication
        host: '%'
        priv: "*.*:REPLICATION CLIENT, REPLICATION SLAVE"
        password: "{{ root_password }}"

Now, of course, I will change the configuration with a template and so on, but that is for a later module.
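
Just to give an idea of where that is going, the whole "echo into /etc/my.cnf" task would be replaced by something like the sketch below, where my.cnf.j2 is a hypothetical Jinja2 template holding the same settings:

    - name: Deploy MySQL configuration from a template (sketch, my.cnf.j2 is hypothetical)
      ansible.builtin.template:
        src: my.cnf.j2
        dest: /etc/my.cnf
        owner: root
        group: root
        mode: '0644'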
