=====Overview=====
The master instance is the main instance which controls the applications (the containers) on the cluster. Don't forget, in our case the Kubernetes cluster consists of at least 1 master and 2 nodes, which gives us 2 machines that can actually run the application workloads.
  
So let's initialize the cluster from the master instance:
  
  
=====Initialize the cluster=====
To initialize the cluster, we have to take two factors into consideration:

  - Which will be the advertise IP?
  - Which network will we use for the pods?

The first question is pretty easy. Just use the address which is assigned to your master. In our case we have 1 master and 2 nodes with the following addresses, so we will advertise the IP of the master:

  * master - 192.168.50.10
  * node1 - 192.168.50.11
  * node2 - 192.168.50.12

The second question, however, depends on the network which will be used for the pods. In our example I have used Calico, for the reasons listed below. Thus, our pod network by default is 192.168.0.0/16.

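If you are not sure which address to advertise, you can quickly confirm it on the master before running kubeadm. This is just an optional sanity check with standard iproute2 commands; 192.168.50.11 below is simply one of our node IPs:

<Code:shell|Check the advertise IP (optional)>
# List all IPv4 addresses on the master and look for 192.168.50.10
ip -4 addr show
# Ask the routing table which local address would be used to reach a node
ip route get 192.168.50.11
</Code>
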
So let's see how the command looks. Note that we also pass --ignore-preflight-errors=NumCPU, because our small VMs do not meet the 2-CPU minimum that kubeadm normally enforces for the control plane:

<Code:none|Initialize the cluster>
root@k8s-master:~# kubeadm init --ignore-preflight-errors=NumCPU --apiserver-advertise-address=192.168.50.10 --pod-network-cidr=192.168.0.0/16
W0421 09:20:50.597038   21388 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.2
[preflight] Running pre-flight checks
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
**************************************************************************************************************************
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
Then you can join any number of worker nodes by running the following on each as root:
  
kubeadm join 192.168.50.10:6443 --token k7cnjt.c0vkn3i6sc9qp2it \
    --discovery-token-ca-cert-hash sha256:8c7874be67b9670c52a729b7a26bdefb4b55f5a49402624c0d262c0253732228
root@k8s-master:~#
</Code>
  
After that, we have to run a couple of commands as the user which will be responsible for Kubernetes and which is not root. (P.S. running applications as root is STRONGLY DISCOURAGED for security reasons :) )

So let's just copy the admin config for that user, as instructed in the kubeadm output above:

<Code:none|Execute as normal User>
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
</Code>
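
Alternatively, if you really do want to run kubectl as root (for a quick test only), you can point it straight at the admin config instead of copying it; this is the same alternative that kubeadm itself prints at the end of the init output:

<Code:shell|Run kubectl as root (not recommended)>
export KUBECONFIG=/etc/kubernetes/admin.conf
</Code>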

Once we have done that, we can check the cluster:

<Code:shell|Check the cluster>
ubuntu@k8s-master:~$ kubectl get nodes
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   NotReady   master   62s   v1.18.2
</Code>

Now, you can see that the cluster reports the node as NotReady. To understand what exactly isn't ready, let's look at the cluster components:

<Code:none|Check cluster components>
ubuntu@k8s-master:~$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
kube-system   coredns-66bff467f8-rgh8d             0/1     Pending   0          62s
kube-system   coredns-66bff467f8-tql72             0/1     Pending   0          62s
kube-system   etcd-k8s-master                      1/1     Running   0          72s
kube-system   kube-apiserver-k8s-master            1/1     Running   0          72s
kube-system   kube-controller-manager-k8s-master   1/1     Running   0          72s
kube-system   kube-proxy-jkmql                     1/1     Running   0          62s
kube-system   kube-scheduler-k8s-master            1/1     Running   0          72s
</Code>

From this we can see that CoreDNS isn't ready, which means our pod network hasn't been applied yet. This is exactly the step which the kubeadm output warned us about:
<Code:none|Missed steps>
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
</Code>
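
If you want to confirm that yourself, you can describe one of the Pending pods and look at its events; the pod name below is taken from the listing above and will be different on your cluster:

<Code:shell|Inspect a pending pod (optional)>
# The Events section at the bottom explains why the pod cannot be scheduled yet
kubectl -n kube-system describe pod coredns-66bff467f8-rgh8d
</Code>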

====Configure Calico Pod Network====
So, which pod network will we use? As already mentioned, if you are using Kubernetes > 1.16 you cannot use the Weave network, so I had to use Calico instead:

<Code:shell|Apply Calico Pod network>
ubuntu@k8s-master:~$ kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
ubuntu@k8s-master:~$
</Code>
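
If you prefer to review the manifest before applying it (or to keep a local copy), you can download it first; the file name calico.yaml is just a local choice:

<Code:shell|Download the Calico manifest first (optional)>
curl -o calico.yaml https://docs.projectcalico.org/v3.8/manifests/calico.yaml
kubectl apply -f calico.yaml
</Code>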

After that we can check the components again:

<Code:shell|Check cluster components>
ubuntu@k8s-master:~$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS     RESTARTS   AGE
kube-system   calico-kube-controllers-77c5fc8d7f-88lsl   0/1     Pending    0          33s
kube-system   calico-node-bqw8q                          0/1     Init:0/3   0          33s
kube-system   coredns-66bff467f8-rgh8d                   0/1     Pending    0          114s
kube-system   coredns-66bff467f8-tql72                   0/1     Pending    0          114s
kube-system   etcd-k8s-master                            1/1     Running    0          2m4s
kube-system   kube-apiserver-k8s-master                  1/1     Running    0          2m4s
kube-system   kube-controller-manager-k8s-master         1/1     Running    0          2m4s
kube-system   kube-proxy-jkmql                           1/1     Running    0          114s
kube-system   kube-scheduler-k8s-master                  1/1     Running    0          2m4s
</Code>

We can see that the Calico pods are being initialized as well ("Init:0/3"), so give them a little time to start up. During that time the machine can be quite slow, so have a little patience. In the end you will see something like this:

<Code:shell|Check cluster components>
ubuntu@k8s-master:~/.kube$ kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   56m   v1.18.2
ubuntu@k8s-master:~/.kube$
ubuntu@k8s-master:~/.kube$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-77c5fc8d7f-88lsl   1/1     Running   0          54m
kube-system   calico-node-bqw8q                          1/1     Running   0          54m
kube-system   coredns-66bff467f8-rgh8d                   1/1     Running   0          55m
kube-system   coredns-66bff467f8-tql72                   1/1     Running   0          55m
kube-system   etcd-k8s-master                            1/1     Running   0          55m
kube-system   kube-apiserver-k8s-master                  1/1     Running   0          55m
kube-system   kube-controller-manager-k8s-master         1/1     Running   0          55m
kube-system   kube-proxy-jkmql                           1/1     Running   0          55m
kube-system   kube-scheduler-k8s-master                  1/1     Running   0          55m
</Code>
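
If you do not want to re-run the command by hand while waiting, you can also watch the pods come up live; press Ctrl+C to stop watching:

<Code:shell|Watch the pods start up (optional)>
# -w keeps the listing open and prints a new line on every status change
kubectl get pods --all-namespaces -w
</Code>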

That concludes the initialization of the cluster. In the next section we will discuss how to add new nodes :)

=====Join the cluster=====
Be sure that you have installed the necessary packages from the introduction section. Once this is done, we can add the node to the cluster as follows:

<Code:shell|Add node>
root@node-1:~# kubeadm join 192.168.50.10:6443 --token k7cnjt.c0vkn3i6sc9qp2it --discovery-token-ca-cert-hash sha256:8c7874be67b9670c52a729b7a26bdefb4b55f5a49402624c0d262c0253732228
W0421 10:28:13.551137   21280 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

root@node-1:~#
</Code>
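
If you have lost the join command, or the token has expired (by default kubeadm tokens are valid for 24 hours), you can generate a fresh join command on the master at any time:

<Code:shell|Regenerate the join command (optional)>
kubeadm token create --print-join-command
</Code>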

As with the master node, it might take some time until you see the node as Ready and all components Running from the control-plane machine:

<Code:shell|Check the newly added Node>
ubuntu@k8s-master:~/.kube$ kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   67m   v1.18.2
node-1       Ready    <none>   82s   v1.18.2
ubuntu@k8s-master:~/.kube$
ubuntu@k8s-master:~/.kube$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-77c5fc8d7f-88lsl   1/1     Running   0          65m
kube-system   calico-node-bqw8q                          1/1     Running   0          65m
kube-system   calico-node-wwfc5                          0/1     Running   0          75s
kube-system   coredns-66bff467f8-rgh8d                   1/1     Running   0          67m
kube-system   coredns-66bff467f8-tql72                   1/1     Running   0          67m
kube-system   etcd-k8s-master                            1/1     Running   0          67m
kube-system   kube-apiserver-k8s-master                  1/1     Running   0          67m
kube-system   kube-controller-manager-k8s-master         1/1     Running   0          67m
kube-system   kube-proxy-hnmxb                           1/1     Running   0          75s
kube-system   kube-proxy-jkmql                           1/1     Running   0          67m
kube-system   kube-scheduler-k8s-master                  1/1     Running   0          67m
</Code>

Please execute that step on all nodes. In the end you should have something like this:

<Code:shell|Check the newly added Node>
ubuntu@k8s-master:~/.kube$ kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   77m   v1.18.2
node-1       Ready    <none>   11m   v1.18.2
node-2       Ready    <none>   88s   v1.18.2
ubuntu@k8s-master:~/.kube$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-77c5fc8d7f-88lsl   1/1     Running   0          75m
kube-system   calico-node-bqw8q                          1/1     Running   0          75m
kube-system   calico-node-fl6ft                          1/1     Running   0          84s
kube-system   calico-node-wwfc5                          1/1     Running   0          11m
kube-system   coredns-66bff467f8-rgh8d                   1/1     Running   0          77m
kube-system   coredns-66bff467f8-tql72                   1/1     Running   0          77m
kube-system   etcd-k8s-master                            1/1     Running   0          77m
kube-system   kube-apiserver-k8s-master                  1/1     Running   0          77m
kube-system   kube-controller-manager-k8s-master         1/1     Running   0          77m
kube-system   kube-proxy-hnmxb                           1/1     Running   0          11m
kube-system   kube-proxy-jkmql                           1/1     Running   0          77m
kube-system   kube-proxy-s4nrh                           1/1     Running   0          84s
kube-system   kube-scheduler-k8s-master                  1/1     Running   0          77m
</Code>

====Assign a role to a Node====
You saw that our worker nodes have no roles; we have 1 master and that is that :)

<Code:none|Nodes' roles>
ubuntu@k8s-master:~/.kube$ kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   77m   v1.18.2
node-1       Ready    <none>   11m   v1.18.2
node-2       Ready    <none>   88s   v1.18.2
</Code>

So, how do we assign a role to a node? Well, in Kubernetes we do it with labels, which are assigned (and removed) as follows:

<Code:none|Assign label>
kubectl label node <node name> node-role.kubernetes.io/<role name>=<value (any name)>    - To assign the label
kubectl label node <node name> node-role.kubernetes.io/<role name>-                      - To remove the label (note the trailing "-")
</Code>
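
To double-check which labels a node actually carries, you can always print them; the grep is just to keep the output short:

<Code:shell|Show node labels (optional)>
kubectl get node node-1 --show-labels
# or show only the role-related labels
kubectl get node node-1 --show-labels | tr ',' '\n' | grep node-role
</Code>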

So let's assign the worker role to node-1 and node-2:

<Code:shell|Assign Labels to Node-1 and Node-2>
ubuntu@k8s-master:~/.kube$ kubectl label node node-1 node-role.kubernetes.io/worker=worker
node/node-1 labeled
ubuntu@k8s-master:~/.kube$ kubectl label node node-2 node-role.kubernetes.io/worker=worker
node/node-2 labeled
ubuntu@k8s-master:~/.kube$ kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   83m     v1.18.2
node-1       Ready    worker   17m     v1.18.2
node-2       Ready    worker   7m39s   v1.18.2
ubuntu@k8s-master:~/.kube$
</Code>

Alternatively, we can remove a label from a node. So let's remove the label from node-2 and then add it again:

<Code:shell|Remove and Add Label on Node-2>
ubuntu@k8s-master:~/.kube$ kubectl label node node-2 node-role.kubernetes.io/worker-
node/node-2 labeled
ubuntu@k8s-master:~/.kube$ kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   86m   v1.18.2
node-1       Ready    worker   20m   v1.18.2
node-2       Ready    <none>   10m   v1.18.2
ubuntu@k8s-master:~/.kube$ kubectl label node node-2 node-role.kubernetes.io/worker=worker
node/node-2 labeled
ubuntu@k8s-master:~/.kube$ kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   87m   v1.18.2
node-1       Ready    worker   20m   v1.18.2
node-2       Ready    worker   11m   v1.18.2
ubuntu@k8s-master:~/.kube$
</Code>