docker_advanced_k8s_init

=====Overview=====
The master instance is the main instance which controls the applications or the containers on the cluster. Don't forget, in our case Kubernetes consists of at least 1 master and 2 nodes; in total, 2 machines which can run the applications.

So let's initialize the cluster from the master instance:

=====Initialize the cluster=====
To initialize the cluster, we have to take two factors into consideration:

  - Which will be the advertise IP?
  - Which network will we use for the pods?

The first question is pretty easy: just use the IP which is assigned to your master (see the quick check after the list below). In our case, we have 1 master and 2 nodes,
so we will use the master's IP as the advertise address:

  * master - 192.168.50.10
  * node1 - 192.168.50.11
  * node2 - 192.168.50.12
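
If you want to double-check which address to advertise, you can simply list the master's interfaces and look for the 192.168.50.10 address. This is only a quick sanity check; the interface name (enp0s8 is typical for a Vagrant host-only network) is an assumption and may differ on your machine:

<Code:shell|Check the master's IP (optional sanity check)>
# List all IPv4 addresses and pick the one on the 192.168.50.0/24 network
ip -4 addr show

# Or filter directly for the expected address (interface name may vary)
ip -4 addr show | grep 192.168.50.10
</Code>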

The second question, however, depends on the network which will be used for the pods. In our example I have used Calico, for the reasons listed below. Thus, our pod network by default is: 192.168.0.0/16.

So let's see our command:

<Code:none|Initialize the cluster>
root@k8s-master:~# kubeadm init --ignore-preflight-errors=NumCPU --apiserver-advertise-address=192.168.50.10 --pod-network-cidr=192.168.0.0/16
W0421 09:20:50.597038   21388 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.2
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
**************************************************************************************************************************
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.50.10:6443 --token k7cnjt.c0vkn3i6sc9qp2it \
    --discovery-token-ca-cert-hash sha256:8c7874be67b9670c52a729b7a26bdefb4b55f5a49402624c0d262c0253732228
root@k8s-master:~#
</Code>

After that, we have to run a couple of commands as the user which will be responsible for Kubernetes and which isn't root. (P.S. using root for applications is STRONGLY DISCOURAGED for security reasons :) )

So just copy the admin config as the instructions above describe:

<Code:none|Execute as normal User>
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
</Code>

Once we have done that, we can check the cluster:

<Code:shell|Check the cluster>
ubuntu@k8s-master:~$ kubectl get nodes
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   NotReady   master   62s   v1.18.2
</Code>
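
Before digging into the individual pods, you can also ask the node itself why it reports NotReady; the node conditions and events should point at the missing pod network. This is an optional, hedged check (output not shown here):

<Code:shell|Describe the master node (optional)>
# Look at the Conditions section for the reason the node is NotReady
kubectl describe node k8s-master
</Code>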

So the cluster is reporting that the master isn't Ready. But what does that mean? Let's see which part isn't ready:

<Code:none|Check cluster components>
ubuntu@k8s-master:~$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
kube-system   coredns-66bff467f8-rgh8d             0/1     Pending   0          62s
kube-system   coredns-66bff467f8-tql72             0/1     Pending   0          62s
kube-system   etcd-k8s-master                      1/1     Running   0          72s
kube-system   kube-apiserver-k8s-master            1/1     Running   0          72s
kube-system   kube-controller-manager-k8s-master   1/1     Running   0          72s
kube-system   kube-proxy-jkmql                     1/1     Running   0          62s
kube-system   kube-scheduler-k8s-master            1/1     Running   0          72s
</Code>

From this we can see that CoreDNS isn't ready, meaning the pod network from the steps above hasn't been applied yet:
<Code:none|Missed steps>
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
</Code>
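
If you want to see the exact reason a CoreDNS pod stays Pending, you can describe it; the pod name below is taken from the listing above and will differ in your cluster:

<Code:shell|Describe a Pending CoreDNS pod (optional)>
# The Events section at the bottom explains why the pod cannot be scheduled yet
kubectl -n kube-system describe pod coredns-66bff467f8-rgh8d
</Code>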

====Configure Calico Pod Network====
So which pod network will we use? As already mentioned, if you are using Kubernetes >1.16, then you cannot use the Weave network. Because of that, I had to use Calico:

<Code:shell|Apply Calico Pod network>
ubuntu@k8s-master:~$ kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
ubuntu@k8s-master:~$
</Code>

After that we can check the components again:

<Code:shell|Check cluster components>
ubuntu@k8s-master:~$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS     RESTARTS   AGE
kube-system   calico-kube-controllers-77c5fc8d7f-88lsl   0/1     Pending    0          33s
kube-system   calico-node-bqw8q                          0/1     Init:0/3   0          33s
kube-system   coredns-66bff467f8-rgh8d                   0/1     Pending    0          114s
kube-system   coredns-66bff467f8-tql72                   0/1     Pending    0          114s
kube-system   etcd-k8s-master                            1/1     Running    0          2m4s
kube-system   kube-apiserver-k8s-master                  1/1     Running    0          2m4s
kube-system   kube-controller-manager-k8s-master         1/1     Running    0          2m4s
kube-system   kube-proxy-jkmql                           1/1     Running    0          114s
kube-system   kube-scheduler-k8s-master                  1/1     Running    0          2m4s
</Code>


We see they are being initialized as well ("Init:0/3"), so give them a little time to start up. During that time the machine can be very slow, so have a little patience. In the end you will see something like this:

<Code:shell|Check cluster components>
ubuntu@k8s-master:~/.kube$ kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   56m   v1.18.2
ubuntu@k8s-master:~/.kube$
ubuntu@k8s-master:~/.kube$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-77c5fc8d7f-88lsl   1/1     Running   0          54m
kube-system   calico-node-bqw8q                          1/1     Running   0          54m
kube-system   coredns-66bff467f8-rgh8d                   1/1     Running   0          55m
kube-system   coredns-66bff467f8-tql72                   1/1     Running   0          55m
kube-system   etcd-k8s-master                            1/1     Running   0          55m
kube-system   kube-apiserver-k8s-master                  1/1     Running   0          55m
kube-system   kube-controller-manager-k8s-master         1/1     Running   0          55m
kube-system   kube-proxy-jkmql                           1/1     Running   0          55m
kube-system   kube-scheduler-k8s-master                  1/1     Running   0          55m
</Code>
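
Instead of re-running the command by hand, you can also let kubectl watch or wait for the pods to settle. A small sketch (the 5-minute timeout is an arbitrary choice):

<Code:shell|Wait for the kube-system pods (optional)>
# Stream pod updates until you press Ctrl+C
kubectl get pods --all-namespaces -w

# Or block until every pod in kube-system reports Ready (up to 5 minutes)
kubectl wait --namespace kube-system --for=condition=Ready pods --all --timeout=300s
</Code>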

That concludes the initialization of the cluster. In the next section we will discuss how to add new nodes :)

=====Join the cluster=====
Be sure that you have installed the necessary packages from the introduction section. Once this is done, we can add the node to the cluster as follows:

<Code:shell|Add node>
root@node-1:~# kubeadm join 192.168.50.10:6443 --token k7cnjt.c0vkn3i6sc9qp2it --discovery-token-ca-cert-hash sha256:8c7874be67b9670c52a729b7a26bdefb4b55f5a49402624c0d262c0253732228
W0421 10:28:13.551137   21280 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

root@node-1:~#
</Code>
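
If you no longer have the original join command, or the token has expired (by default tokens are valid for 24 hours), you can generate a fresh one on the master:

<Code:shell|Regenerate the join command (on the master)>
# Prints a complete "kubeadm join ..." command with a new token and the CA cert hash
kubeadm token create --print-join-command
</Code>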

As with the master node, it might take some time until you see the node as Ready and all components running from the control-plane machine:

<Code:shell|Check the newly added Node>
ubuntu@k8s-master:~/.kube$ kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   67m   v1.18.2
node-1       Ready    <none>   82s   v1.18.2
ubuntu@k8s-master:~/.kube$
ubuntu@k8s-master:~/.kube$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-77c5fc8d7f-88lsl   1/1     Running   0          65m
kube-system   calico-node-bqw8q                          1/1     Running   0          65m
kube-system   calico-node-wwfc5                          0/1     Running   0          75s
kube-system   coredns-66bff467f8-rgh8d                   1/1     Running   0          67m
kube-system   coredns-66bff467f8-tql72                   1/1     Running   0          67m
kube-system   etcd-k8s-master                            1/1     Running   0          67m
kube-system   kube-apiserver-k8s-master                  1/1     Running   0          67m
kube-system   kube-controller-manager-k8s-master         1/1     Running   0          67m
kube-system   kube-proxy-hnmxb                           1/1     Running   0          75s
kube-system   kube-proxy-jkmql                           1/1     Running   0          67m
kube-system   kube-scheduler-k8s-master                  1/1     Running   0          67m
</Code>

Please execute that step on all nodes. In the end you should have something like this:

<Code:shell|Check the newly added Node>
ubuntu@k8s-master:~/.kube$ kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   77m   v1.18.2
node-1       Ready    <none>   11m   v1.18.2
node-2       Ready    <none>   88s   v1.18.2
ubuntu@k8s-master:~/.kube$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-77c5fc8d7f-88lsl   1/1     Running   0          75m
kube-system   calico-node-bqw8q                          1/1     Running   0          75m
kube-system   calico-node-fl6ft                          1/1     Running   0          84s
kube-system   calico-node-wwfc5                          1/1     Running   0          11m
kube-system   coredns-66bff467f8-rgh8d                   1/1     Running   0          77m
kube-system   coredns-66bff467f8-tql72                   1/1     Running   0          77m
kube-system   etcd-k8s-master                            1/1     Running   0          77m
kube-system   kube-apiserver-k8s-master                  1/1     Running   0          77m
kube-system   kube-controller-manager-k8s-master         1/1     Running   0          77m
kube-system   kube-proxy-hnmxb                           1/1     Running   0          11m
kube-system   kube-proxy-jkmql                           1/1     Running   0          77m
kube-system   kube-proxy-s4nrh                           1/1     Running   0          84s
kube-system   kube-scheduler-k8s-master                  1/1     Running   0          77m
</Code>

====Assign role to a Node====
You saw that our worker nodes have no roles. We have 1 master and that is that :)

<Code:none|Nodes' roles>
ubuntu@k8s-master:~/.kube$ kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   77m   v1.18.2
node-1       Ready    <none>   11m   v1.18.2
node-2       Ready    <none>   88s   v1.18.2
</Code>

So, how do we assign roles to a node? Well, in Kubernetes, we assign labels. Labels are assigned (and removed, with a trailing dash) as follows:

<Code:none|Assign label>
kubectl label node <node name> node-role.kubernetes.io/<role name>=<key - (any name)>   - To assign the label
kubectl label node <node name> node-role.kubernetes.io/<role name>-                     - To remove the label
</Code>

So let's assign the worker role to our node-1 and node-2:

<Code:shell|Assign Labels to Node-1 and Node-2>
ubuntu@k8s-master:~/.kube$ kubectl label node node-1 node-role.kubernetes.io/worker=worker
node/node-1 labeled
ubuntu@k8s-master:~/.kube$ kubectl label node node-2 node-role.kubernetes.io/worker=worker
node/node-2 labeled
ubuntu@k8s-master:~/.kube$ kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   83m     v1.18.2
node-1       Ready    worker   17m     v1.18.2
node-2       Ready    worker   7m39s   v1.18.2
ubuntu@k8s-master:~/.kube$
</Code>
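
The ROLES column shown above is derived from labels with the node-role.kubernetes.io/ prefix. If you want to see the full label set behind it, you can list the labels explicitly:

<Code:shell|Show node labels (optional)>
# Prints every label on every node, including the node-role.kubernetes.io/worker one we just added
kubectl get nodes --show-labels
</Code>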

Alternatively, we can remove a label from a node. So let's remove and re-add that label on node-2:

<Code:shell|Remove and Add Label on Node-2>
ubuntu@k8s-master:~/.kube$ kubectl label node node-2 node-role.kubernetes.io/worker-
node/node-2 labeled
ubuntu@k8s-master:~/.kube$ kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   86m   v1.18.2
node-1       Ready    worker   20m   v1.18.2
node-2       Ready    <none>   10m   v1.18.2
ubuntu@k8s-master:~/.kube$ kubectl label node node-2 node-role.kubernetes.io/worker=worker
node/node-2 labeled
ubuntu@k8s-master:~/.kube$ kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   87m   v1.18.2
node-1       Ready    worker   20m   v1.18.2
node-2       Ready    worker   11m   v1.18.2
ubuntu@k8s-master:~/.kube$
</Code>
  