docker_advanced_k8s_deploy_pod_deployment

Finally, we managed to come to the atomic unit in Kubernetes: Pods. But what is a pod? Well, it is simple:

  • Hypervisors have VMs
  • Docker has Containers
  • Kubernetes has Pods

You can think of a pod as something between a container and a VM. Unlike VMs, pods are mortal: when one dies, we just bring up a new one. Unlike containers, though, pods have their own IPs. Generally a pod has a single container in it, but there are exceptions. So let's see how it behaves as a VM and as a container:

As we already mentioned, the atomic unit of scheduling in Kubernetes is the Pod: if we want to grow, we add pods, NOT containers. So how do these pods communicate with the outside world? Well, each pod has its own IP. If you remember, when we were configuring the cluster, we configured a pod network, and we even set up Calico to serve as the pod network and handle routing. (P.S. Calico uses BGP routing to provide paths to the pods; you can see how to configure the BGP routing protocol in my Cisco Configuration pages.) So, to summarize:

  • InterPod Communication - via IPs from the pod network (192.168.0.0/16 in our Calico setup)
  • IntraPod Communication - via ports, using localhost as a reference, since all containers in a pod share the same network namespace

I know you are confused, so here is a visual reference:

InterPod Communication

This is the communication between different pods:

IntraPod Communication

This is the communication between different containers within the same pod. Rarely will you see more than one container in a pod, BUT it is possible.
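To make the intra-pod case concrete, here is a minimal sketch of a two-container pod. This is illustrative only (the pod name, container names and images are my assumptions, not part of our app): since both containers share the pod's network namespace, the sidecar reaches the web server simply via localhost.

Two-container pod (illustrative sketch)

apiVersion: v1
kind: Pod
metadata:
  name: two-ctr-pod
spec:
  containers:
  - name: web
    image: nginx:alpine            # serves HTTP on port 80
    ports:
    - containerPort: 80
  - name: sidecar
    image: busybox
    # Same network namespace as "web", so localhost:80 works:
    command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 >/dev/null; sleep 10; done"]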

The lifecycle of a pod is the following:

  1. We submit a manifest file to the API server
  2. The scheduler assigns the pod to a node, the kubelet pulls the image, starts the containers, all that crap, and while this happens the pod sits in the “Pending” phase
  3. If it doesn't fail, it is put in the “Running” phase
  4. Again, if it doesn't fail and its containers exit cleanly, it is put in the “Succeeded” phase
  5. If it fails at any of the previous phases, it is put in the “Failed” phase and we have to inspect our manifest file or whatnot. WE DON'T FIX FAILED PODS, WE DON'T CARE, THEY ARE DEAD, FINITO :)

Here is a picture for visual reference of the above:
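By the way, if you ever want to see just the phase of a pod, kubectl can print that single field. A quick example, assuming a pod named date-pod like the one we will create below:

Check the pod phase

ubuntu@k8s-master:~$ kubectl get pod date-pod -o jsonpath='{.status.phase}'
Running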

So let's get going. Our first order of business is creating a manifest file, so let's use the same app which we dockerized in the Docker section:

  • andonovj/httpserverdemo:latest

The manifest file can be YAML or JSON, up to you. I have gone with YAML because I hate myself :) It is very simple, but it works.

Manifest file

apiVersion: v1
kind: Pod
metadata:
  name: date-pod
  labels:
    zone: prod
    version: v1
spec:
  containers:
  - name: hello-ctr
    image: andonovj/httpserverdemo:latest
    ports:
    - containerPort: 8080

So here is what this YAML tells the API server:

  1. We name our pod “date-pod”, as it shows the current date
  2. We give it two labels: first that it is in the prod zone and second that it is version 1
  3. We name our container “hello-ctr”
  4. We use the image from the Docker repo: “andonovj/httpserverdemo:latest”
  5. We define port 8080 for connections.
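Before we actually submit it, we can ask kubectl to validate the manifest without creating anything. On kubectl 1.18+ a client-side dry run looks like this (optional, just a sanity check):

Validate the manifest (optional)

ubuntu@k8s-master:~$ kubectl create -f pod.yml --dry-run=client
pod/date-pod created (dry run)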

So let's create it :)

The creation of the pod is done with kubectl, using the YML file (in my case pod.yml):

Create Pod

ubuntu@k8s-master:~$ kubectl create -f pod.yml
pod/date-pod created
ubuntu@k8s-master:~$ kubectl get pods
NAME       READY   STATUS              RESTARTS   AGE
date-pod   0/1     ContainerCreating   0          17s
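While the image is being pulled, we can also ask for a wider view; -o wide shows which node the pod was scheduled on, and its IP once one is assigned (output illustrative, based on my cluster):

Check the pod with -o wide

ubuntu@k8s-master:~$ kubectl get pods -o wide
NAME       READY   STATUS              RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
date-pod   0/1     ContainerCreating   0          20s   <none>   node-2   <none>           <none>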


Now, the status says ContainerCreating, but that is more of an explanation of why the pod is in the Pending phase than a status of its own. We can check a more detailed status using kubectl again, this time with the describe option:

Describe a pod

ubuntu@k8s-master:~$ kubectl describe pods
Name:         date-pod
Namespace:    default
Priority:     0
Node:         node-2/10.0.2.15
Start Time:   Fri, 01 May 2020 15:41:40 +0000
Labels:       version=v1
              zone=prod
Annotations:  cni.projectcalico.org/podIP: 192.168.247.1/32
Status:       Running
IP:           192.168.247.1
IPs:
  IP:  192.168.247.1
Containers:
  hello-ctr:
    Container ID:   docker://0709a53a0b8b3d05830ae6a08976ef50df520229b1e37fe08809d53e31e5e489
    Image:          andonovj/httpserverdemo:latest
    Image ID:       docker-pullable://andonovj/httpserverdemo@sha256:5e0866ff45e12c8e350923fbe32d94bd76bd2d1576722d4d55ca786043bfcbe1
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Fri, 01 May 2020 15:42:05 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-jc2sg (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-jc2sg:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-jc2sg
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age        From               Message
  ----    ------     ----       ----               -------
  Normal  Scheduled  <unknown>  default-scheduler  Successfully assigned default/date-pod to node-2
  Normal  Pulling    42s        kubelet, node-2    Pulling image "andonovj/httpserverdemo:latest"
  Normal  Pulled     21s        kubelet, node-2    Successfully pulled image "andonovj/httpserverdemo:latest"
  Normal  Created    21s        kubelet, node-2    Created container hello-ctr
  Normal  Started    21s        kubelet, node-2    Started container hello-ctr

Congrats, that was your first manual pod deployment.
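Since the pod got an IP from the Calico pod network (192.168.247.1, as the describe output shows), we can poke the app directly from any node, assuming the pod network routing is in place. A quick sanity check could look like this (the exact response depends on the app, so it is omitted here):

Test the app via the pod IP

root@node-2:~# curl http://192.168.247.1:8080/
(the app's response, e.g. the current date, is printed here)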

So we know that a pod runs one, or in some cases more, container(s). We can check our worker nodes; in my case, the pod landed on the 2nd worker:

Check the container

root@node-2:~# docker container ls
CONTAINER ID        IMAGE                     COMMAND                  CREATED             STATUS              PORTS               NAMES
0709a53a0b8b        andonovj/httpserverdemo   "dotnet HttpServerDe…"   25 minutes ago      Up 25 minutes                           k8s_hello-ctr_date-pod_default_2445c62d-2329-40ba-b026-e2c98031366c_0
root@node-2:~#
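Note the container name on the worker: with the Docker runtime, the kubelet names containers as k8s_<container>_<pod>_<namespace>_<pod-uid>_<restart-count>, which is exactly why our hello-ctr shows up like that. You can use that pattern to filter (output trimmed for readability):

Filter by container name

root@node-2:~# docker container ls --filter "name=k8s_hello-ctr"
CONTAINER ID        IMAGE                     ...   NAMES
0709a53a0b8b        andonovj/httpserverdemo   ...   k8s_hello-ctr_date-pod_default_2445c62d-2329-40ba-b026-e2c98031366c_0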

So we requested 1 container and 1 is running on the 2nd worker. But what if we want to delete the pod? Well, very simple: again use the kubectl command, as follows:

Delete a pod

ubuntu@k8s-master:~$ kubectl get pods
NAME       READY   STATUS    RESTARTS   AGE
date-pod   1/1     Running   0          38m
ubuntu@k8s-master:~$ kubectl delete pod date-pod
pod "date-pod" deleted
ubuntu@k8s-master:~$
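By the way, since we still have the manifest, we could delete the pod by pointing kubectl at the file instead; both commands do the same thing here:

Delete via the manifest

ubuntu@k8s-master:~$ kubectl delete -f pod.yml
pod "date-pod" deleted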

But what if we want more resilience and load balancing? Let's say we want 5 replicas of that pod. Well, that is a job for the replication controller, which we will discuss in the next section.
