=====Overview=====

Kubernetes is the next evolution of Docker Swarm, so in order to configure it we first have to configure Docker (just without the swarm this time). We will configure 3 servers again:

  * One master
  * Two workers

Since I hate seeing only Debian-based examples, I will do it on both Ubuntu and CentOS. First up is Ubuntu, so let's get going.

=====Provision the VMs=====

You can reuse the first section of the Vagrant advanced configurations, or you can continue here, where we will do pretty much the same. I like wasting space :D So let's get going with Vagrant again :)

<code ruby>
IMAGE_NAME = "bento/ubuntu-16.04"
N = 2

Vagrant.configure("2") do |config|
    config.ssh.insert_key = false

    config.vm.provider "virtualbox" do |v|
        v.memory = 1024
        v.cpus = 2
    end

    config.vm.define "k8s-master" do |master|
        master.vm.box = IMAGE_NAME
        master.vm.network "private_network", ip: "192.168.50.10"
        master.vm.hostname = "k8s-master"
        master.vm.provision "ansible" do |ansible|
            ansible.playbook = "kubernetes-setup/master-playbook.yml"
            ansible.extra_vars = {
                node_ip: "192.168.50.10",
            }
        end
    end

    (1..N).each do |i|
        config.vm.define "node-#{i}" do |node|
            node.vm.box = IMAGE_NAME
            node.vm.network "private_network", ip: "192.168.50.#{i + 10}"
            node.vm.hostname = "node-#{i}"
            node.vm.provision "ansible" do |ansible|
                ansible.playbook = "kubernetes-setup/node-playbook.yml"
                ansible.extra_vars = {
                    node_ip: "192.168.50.#{i + 10}",
                }
            end
        end
    end
end
</code>

As you can see, we are referring to two files here:

  * master-playbook.yml - Ansible playbook for the master node
  * node-playbook.yml - Ansible playbook for the other nodes

Please create both files inside a kubernetes-setup directory next to the Vagrantfile (that is the path the provisioner expects), as follows:

====Configure the Master Playbook====

Let's start building the master playbook:

<code yaml>
---
- hosts: all
  become: true
  tasks:
  - name: Install packages that allow apt to be used over HTTPS
    apt:
      name: "{{ packages }}"
      state: present
      update_cache: yes
    vars:
      packages:
      - apt-transport-https
      - ca-certificates
      - curl
      - gnupg-agent
      - software-properties-common

  - name: Add an apt signing key for Docker
    apt_key:
      url: https://download.docker.com/linux/ubuntu/gpg
      state: present

  - name: Add apt repository for stable version
    apt_repository:
      repo: deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable
      state: present

  - name: Install docker and its dependencies
    apt:
      name: "{{ packages }}"
      state: present
      update_cache: yes
    vars:
      packages:
      - docker-ce
      - docker-ce-cli
      - containerd.io
    notify:
      - docker status

  - name: Add vagrant user to docker group
    user:
      name: vagrant
      group: docker

  - name: Remove swapfile from /etc/fstab
    mount:
      name: "{{ item }}"
      fstype: swap
      state: absent
    with_items:
      - swap
      - none
</code>

===Remove the swap===

Please bear in mind that the kubelet won't start if swap is enabled, so we have to add the following:

<code yaml>
  - name: Disable swap
    command: swapoff -a
    when: ansible_swaptotal_mb > 0
</code>
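Depending on the base image and the Kubernetes version, kubeadm's preflight checks may also complain that bridged traffic is not visible to iptables. If that happens, a pair of tasks along the following lines, using the stock modprobe and sysctl Ansible modules, can be added before the Kubernetes packages are installed; treat it as an optional sketch rather than a required step:

<code yaml>
  # Optional: only needed if kubeadm complains about bridged traffic.
  - name: Load the br_netfilter kernel module
    modprobe:
      name: br_netfilter
      state: present

  - name: Let iptables see bridged traffic
    sysctl:
      name: net.bridge.bridge-nf-call-iptables
      value: "1"
      state: present
      reload: yes
</code>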
===Install kubelet, kubeadm and kubectl===

After that, we can add tasks to install kubelet, kubeadm and kubectl using the code below:

<code yaml>
  - name: Add an apt signing key for Kubernetes
    apt_key:
      url: https://packages.cloud.google.com/apt/doc/apt-key.gpg
      state: present

  - name: Adding apt repository for Kubernetes
    apt_repository:
      repo: deb https://apt.kubernetes.io/ kubernetes-xenial main
      state: present
      filename: kubernetes.list

  - name: Install Kubernetes binaries
    apt:
      name: "{{ packages }}"
      state: present
      update_cache: yes
    vars:
      packages:
      - kubelet
      - kubeadm
      - kubectl

  - name: Configure node ip
    lineinfile:
      path: /etc/default/kubelet
      line: KUBELET_EXTRA_ARGS=--node-ip={{ node_ip }}

  - name: Restart kubelet
    service:
      name: kubelet
      daemon_reload: yes
      state: restarted
</code>

===Initialize the Kubernetes cluster===

Finally, we can add the task for the initialization as follows:

<code yaml>
  - name: Initialize the Kubernetes cluster using kubeadm
    command: kubeadm init --apiserver-advertise-address="192.168.50.10" --apiserver-cert-extra-sans="192.168.50.10" --node-name k8s-master --pod-network-cidr=192.168.0.0/16
</code>

===Configure Vagrant User for the Cluster===

Since we are using Vagrant, we can set up the vagrant user to access the Kubernetes cluster with the following task:

<code yaml>
  - name: Setup kubeconfig for vagrant user
    command: "{{ item }}"
    with_items:
      - mkdir -p /home/vagrant/.kube
      - cp -i /etc/kubernetes/admin.conf /home/vagrant/.kube/config
      - chown vagrant:vagrant /home/vagrant/.kube/config
</code>

===Configure the Network provider and policy engine===

The pods can only talk to each other once a pod network add-on is installed. Since we initialized the cluster with --pod-network-cidr=192.168.0.0/16, a provider such as Calico (whose default pod CIDR is exactly that range) is the usual choice: the master playbook needs one more task here that applies the network manifest with kubectl, running as the vagrant user so that the kubeconfig we just created is used. A hedged sketch of such a task is given at the bottom of this page.

====Configure the Node Playbook====

We will set up a join file which will be used in the playbook for the other nodes. The following two tasks still belong to the master playbook; they generate the join command on the master and save it on the Ansible control machine:

<code yaml>
  - name: Generate join command
    command: kubeadm token create --print-join-command
    register: join_command

  - name: Copy join command to local file
    local_action: copy content="{{ join_command.stdout_lines[0] }}" dest="./join-command"
</code>

The join command generated by Kubernetes is saved in a local file called "join-command", which will later be executed by the node playbook.

===Configure handlers===

We also have to set up the handler that checks the Docker daemon (this is what the notify in the Docker installation task refers to). It goes at the end of the play, at the same indentation level as tasks:

<code yaml>
  handlers:
    - name: docker status
      service: name=docker state=started
</code>

Finally, we can configure node-playbook.yml with the tasks that actually join the node to the cluster:

<code yaml>
  - name: Copy the join command to server location
    copy: src=join-command dest=/tmp/join-command.sh mode=0777

  - name: Join the node to cluster
    command: sh /tmp/join-command.sh
</code>
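The two join tasks above are only the tail end of node-playbook.yml: a node can only run the join command if Docker, kubelet and kubeadm are already installed on it, so the node playbook reuses the same preparation tasks as the master playbook (everything up to, but not including, the kubeadm init task). A minimal sketch of its overall shape, assuming the tasks shown earlier on this page:

<code yaml>
---
- hosts: all
  become: true
  tasks:
  # 1. Same preparation as in master-playbook.yml:
  #    Docker installation, swap removal and disabling,
  #    kubelet/kubeadm/kubectl installation, node ip configuration
  #    and the kubelet restart.

  # 2. Then the join steps shown above:
  - name: Copy the join command to server location
    copy: src=join-command dest=/tmp/join-command.sh mode=0777

  - name: Join the node to cluster
    command: sh /tmp/join-command.sh

  handlers:
    - name: docker status
      service: name=docker state=started
</code>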
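And here is the pod-network task promised in the "Configure the Network provider and policy engine" section above. It goes into master-playbook.yml right after the kubeconfig task and before the join command is generated. Calico is assumed here and the manifest URL is only an example, so check the Calico documentation for the manifest that matches your Kubernetes version; become: false makes the task run as the vagrant user, which already has the kubeconfig in place:

<code yaml>
  # Assumed example: adjust the manifest URL to your Kubernetes/Calico versions.
  - name: Install calico pod network
    become: false
    command: kubectl create -f https://docs.projectcalico.org/v3.4/getting-started/kubernetes/installation/hosted/calico.yaml
</code>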