Overview
The master instance is the one that controls the applications and containers running on the cluster. Don't forget: our Kubernetes cluster consists of at least 1 master and 2 nodes, which gives us 2 machines that actually run the application workloads (by default, the master does not schedule regular application pods).
So let's initialize the cluster from the master instance:
Initialize the cluster
To initialize the cluster, we have to take two factors into consideration:
- Which will be the advertise IP?
- Which network will we use for the pods?
The first question is pretty easy. Just use the IP which is already assigned to your master. In our case, we have 1 master and 2 nodes with the following addresses, so we will use the master's IP as the advertise address:
- master - 192.168.50.10
- node1 - 192.168.50.11
- node2 - 192.168.50.12
The second question, however, depends on which network plugin will be used for the pods. In our example I have used Calico, for the reasons listed below. Thus, our pod network is Calico's default: 192.168.0.0/16.
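Before running kubeadm init, it doesn't hurt to double-check which IP is actually assigned to the master, so that the advertise address really matches. A minimal check (plain standard Linux commands, nothing Kubernetes-specific):

Check the master IP (optional)
# Show all IPv4 addresses assigned to this machine; we expect to see 192.168.50.10 here
ip -4 addr show

# Or print all assigned addresses on one line
hostname -I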
So let's see the command in action:
Initialize the cluster
root@k8s-master:~# kubeadm init --ignore-preflight-errors=NumCPU --apiserver-advertise-address=192.168.50.10 --pod-network-cidr=192.168.0.0/16
W0421 09:20:50.597038   21388 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.2
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
**************************************************************************************************************************
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.50.10:6443 --token k7cnjt.c0vkn3i6sc9qp2it \
    --discovery-token-ca-cert-hash sha256:8c7874be67b9670c52a729b7a26bdefb4b55f5a49402624c0d262c0253732228
root@k8s-master:~#
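One remark about the output above: the kubeadm join line at the bottom is printed only once, and the token in it expires after a while (24 hours by default). If you lose it before joining the nodes, there is no need to re-initialize anything; you can generate a fresh join command on the master at any time:

Regenerate the join command (optional)
# Print a new "kubeadm join ..." command with a fresh token and the CA cert hash
kubeadm token create --print-join-command

# List the bootstrap tokens that currently exist and when they expire
kubeadm token list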
After that, we have to run a couple of commands as the regular user which will manage Kubernetes, not as root. (P.S. using root for applications is STRONGLY DISCOURAGED for security reasons :) )
So just copy the admin config, using the instructions from the output above:
Execute as normal User
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
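These three commands simply copy the admin kubeconfig generated by kubeadm into the user's home directory and make the user its owner, so kubectl can find and read it. If you only want a quick one-off test as root instead (again, not recommended for regular use), pointing kubectl at the original file also works:

Alternative for root (not recommended)
# Use the admin kubeconfig directly, for the current shell session only
export KUBECONFIG=/etc/kubernetes/admin.conf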
Once we have done that, we can check the cluster:
Check the cluster
ubuntu@k8s-master:~$ kubectl get nodes
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   NotReady   master   62s   v1.18.2
Now, you can see that the node is reported as NotReady. To find out what that means, let's see which part isn't ready:
Check cluster components
ubuntu@k8s-master:~$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
kube-system   coredns-66bff467f8-rgh8d             0/1     Pending   0          62s
kube-system   coredns-66bff467f8-tql72             0/1     Pending   0          62s
kube-system   etcd-k8s-master                      1/1     Running   0          72s
kube-system   kube-apiserver-k8s-master            1/1     Running   0          72s
kube-system   kube-controller-manager-k8s-master   1/1     Running   0          72s
kube-system   kube-proxy-jkmql                     1/1     Running   0          62s
kube-system   kube-scheduler-k8s-master            1/1     Running   0          72s

From this we can see that CoreDNS isn't ready, meaning our pod network hasn't been applied yet; this is exactly the step the init output reminded us about:

Missed steps
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
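If you want to confirm that the missing pod network is really what keeps the node NotReady, the node conditions tell the same story. A quick way to check (the exact wording of the message depends on the kubelet version, but it typically mentions an uninitialized CNI config):

Inspect the node conditions (optional)
# Look at the "Conditions" section, in particular the reason of the Ready condition
kubectl describe node k8s-master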
So which pod network will we use? As already mentioned, if you are using Kubernetes > 1.16, you cannot use the Weave network. Because of that, I have used Calico:
Apply Calico Pod network
ubuntu@k8s-master:~$ kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
ubuntu@k8s-master:~$
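A short note on this manifest: it assumes Calico's default pod network 192.168.0.0/16, which is exactly what we passed as --pod-network-cidr, so it can be applied unchanged. If you had initialized the cluster with a different pod CIDR, you would download the manifest first and adjust the CALICO_IPV4POOL_CIDR value to match before applying it (the 10.244.0.0/16 below is only an example):

Custom pod CIDR (only if you deviated from the default)
# Download the same manifest locally
curl -O https://docs.projectcalico.org/v3.8/manifests/calico.yaml

# Edit calico.yaml and set CALICO_IPV4POOL_CIDR to your CIDR, e.g. 10.244.0.0/16,
# then apply the edited file instead of the remote one
kubectl apply -f calico.yaml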
After that we can check the components again:
Check cluster components
ubuntu@k8s-master:~$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS     RESTARTS   AGE
kube-system   calico-kube-controllers-77c5fc8d7f-88lsl   0/1     Pending    0          33s
kube-system   calico-node-bqw8q                          0/1     Init:0/3   0          33s
kube-system   coredns-66bff467f8-rgh8d                   0/1     Pending    0          114s
kube-system   coredns-66bff467f8-tql72                   0/1     Pending    0          114s
kube-system   etcd-k8s-master                            1/1     Running    0          2m4s
kube-system   kube-apiserver-k8s-master                  1/1     Running    0          2m4s
kube-system   kube-controller-manager-k8s-master         1/1     Running    0          2m4s
kube-system   kube-proxy-jkmql                           1/1     Running    0          114s
kube-system   kube-scheduler-k8s-master                  1/1     Running    0          2m4s
We see the Calico pods are still being initialized ("Init:0/3"), so give them a little time to start up. During that time the machine can be very slow, so have a little patience. In the end you will see something like this:
Check cluster components
ubuntu@k8s-master:~/.kube$ kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   56m   v1.18.2
ubuntu@k8s-master:~/.kube$
ubuntu@k8s-master:~/.kube$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-77c5fc8d7f-88lsl   1/1     Running   0          54m
kube-system   calico-node-bqw8q                          1/1     Running   0          54m
kube-system   coredns-66bff467f8-rgh8d                   1/1     Running   0          55m
kube-system   coredns-66bff467f8-tql72                   1/1     Running   0          55m
kube-system   etcd-k8s-master                            1/1     Running   0          55m
kube-system   kube-apiserver-k8s-master                  1/1     Running   0          55m
kube-system   kube-controller-manager-k8s-master         1/1     Running   2          55m
kube-system   kube-proxy-jkmql                           1/1     Running   0          55m
kube-system   kube-scheduler-k8s-master                  1/1     Running   2          55m
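If you don't want to re-run kubectl get pods by hand while waiting for everything to come up, you can also let kubectl stream the status changes:

Watch the pods while they start (optional)
# Print every status change of the kube-system pods until you stop it with Ctrl+C
kubectl get pods --all-namespaces -w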
That concludes the initialization of the cluster. In the next section we will discuss how to add new nodes :)