You can think of a pod as something between a container and a VM. Unlike a VM, pods are mortal: when one dies, we just bring up a new one. Unlike plain containers, though, pods have their own IPs. Generally a pod has a single container in it, but there are exceptions.
So let's see how a pod behaves like a VM and like a container:

====Network====
As we already mentioned, the atomic unit of scheduling in Kubernetes is the Pod: if we want to grow, we add pods, NOT containers.
So how do these pods communicate with the outside world? Well, each pod has its own IP.
If you remember, when we were configuring the cluster, we configured a pod network, and we even set up the Calico network to serve as the pod network and provide routing. (P.S. Calico uses BGP routing to advertise paths to the pods; you can see how to configure the BGP routing protocol in my Cisco Configuration pages.) Either way, to summarize:

  * InterPod Communication - via an IP from the Pod Network (192.168.0.0/16 for Calico)
  * IntraPod Communication - via ports, using localhost as a reference to the pod itself

I know you are confused, so here is a visual reference:
===InterPod Communication===
This is the communication between different pods:

(diagram: inter-pod communication via pod-network IPs)

===IntraPod Communication===
This is the communication between different containers within the same pod. Rarely will you see more than one container in a pod, BUT it is possible.

(diagram: intra-pod communication between containers over localhost)
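
To make the localhost part concrete, here is a hypothetical two-container manifest (the pod name, container names and images are mine, purely for illustration). Because both containers share the pod's network namespace, the busybox sidecar reaches nginx simply on localhost:

<code>
apiVersion: v1
kind: Pod
metadata:
  name: two-containers        # hypothetical pod, just to illustrate IntraPod traffic
spec:
  containers:
  - name: web
    image: nginx              # listens on port 80 by default
  - name: sidecar
    image: busybox
    # same network namespace as "web", so localhost:80 reaches nginx
    command: ["sh", "-c", "while true; do wget -qO- http://localhost:80/; sleep 5; done"]
</code>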

====Lifecycle====
The lifecycle of a pod is the following:

  - The pod starts in the "Pending" state while it is scheduled to a node and its images are pulled
  - Once all of its containers are up, it moves to the "Running" state
  - If all of its containers terminate successfully, it ends up in the "Succeeded" state
  - If it fails in any of the previous states, it is put in the "Failed" state

Here is a picture for visual reference:

(diagram: pod lifecycle states)
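
If you only want the current lifecycle state of a pod, without the full output, kubectl can extract it with a JSONPath query (shown here against the date-pod we create below):

<code>
# print only the lifecycle phase of the pod, e.g. "Pending" or "Running"
kubectl get pod date-pod -o jsonpath='{.status.phase}'
</code>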

=====Deployment=====
So let's get going. Our first order of business is creating a manifest file, so let's use the same app which we dockerized in the Docker section:

  * andonovj/

====Manifest file====
The manifest file can be YML or JSON, up to you; I went with YML because I hate myself :) It is very simple, but it works.

<code>
apiVersion: v1
kind: Pod
metadata:
  name: date-pod
  labels:
    zone: prod
    version: v1
spec:
  containers:
  - name: hello-ctr
    image: andonovj/
    ports:
    - containerPort: 8080
</code>

So what this YML is telling the API server:
  - We name our pod "date-pod"
  - We give it two labels: first that it is in the prod zone, second that it is version v1
  - We name our container "hello-ctr"
  - We use the image from the Docker repo: "andonovj/..."
  - We define port 8080 for connections
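
By the way, if any field in such a manifest is unclear, kubectl itself can print the schema documentation for it:

<code>
# print the documentation for the containers field of a pod spec
kubectl explain pod.spec.containers

# drill down further, e.g. into the ports field
kubectl explain pod.spec.containers.ports
</code>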

So let's create it :)

====Create the Pod====
The creation of the pod is done using kubectl and the YML file (in my case pod.yml):

<code>
ubuntu@k8s-master:~$ kubectl create -f pod.yml
pod/date-pod created
ubuntu@k8s-master:~$ kubectl get pods
NAME       READY   STATUS              RESTARTS   AGE
date-pod   0/1     ContainerCreating   0          ...
</code>
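
Instead of re-running get pods by hand until the state flips, you can let kubectl follow it; -o wide additionally shows the pod IP and the node it landed on:

<code>
# keep printing updates as the pod moves from ContainerCreating to Running
kubectl get pods --watch

# one-shot, but with the pod IP and the node it was scheduled on
kubectl get pods -o wide
</code>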

===Describe a pod===
Now, the status says ContainerCreating, but that is more an explanation of why the pod is still in the Pending state than a status of its own.
We can check a more detailed status using kubectl again, this time with the describe option:

<code>
ubuntu@k8s-master:~$ kubectl describe pod date-pod
Name:         date-pod
Namespace:    default
Priority:     0
Node:         node-2/...
Start Time:   Fri, 01 May 2020 15:41:40 +0000
Labels:       version=v1
              zone=prod
Annotations:  ...
Status:       Running
IP:           192.168.247.1
IPs:
  IP:  192.168.247.1
Containers:
  hello-ctr:
    Container ID:   docker://0709a53a0b8b...
    Image:          andonovj/...
    Image ID:       ...
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      ...
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-jc2sg (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-jc2sg:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-jc2sg
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  ...   default-scheduler  Successfully assigned default/date-pod to node-2
  Normal  Pulling    ...   kubelet, node-2    Pulling image "andonovj/..."
  Normal  Pulled     ...   kubelet, node-2    Successfully pulled image "andonovj/..."
  Normal  Created    ...   kubelet, node-2    Created container hello-ctr
  Normal  Started    ...   kubelet, node-2    Started container hello-ctr
</code>

Congrats, that was your first manual pod deployment.
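
Since the pod is up, we can also read the container's stdout straight through the API server, without SSH-ing to the worker:

<code>
# fetch the logs of the hello-ctr container inside the date-pod pod
kubectl logs date-pod -c hello-ctr

# -f streams them live, like tail -f
kubectl logs -f date-pod -c hello-ctr
</code>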

====Check the Container====
So we know that a pod runs one (or in some cases more) container(s).
We can check our servers; in my case, the container landed on the 2nd worker:

<code>
root@node-2:~# docker ps
CONTAINER ID        IMAGE               ...
0709a53a0b8b        andonovj/...        ...
root@node-2:~#
</code>

So we requested 1 container, and 1 is running on the 2nd worker.
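
If the worker runs many containers, you can narrow docker ps down: the kubelet labels every container it starts with the pod it belongs to (the label name below is the standard kubelet one, but verify it on your Docker/Kubernetes version):

<code>
# list only the containers belonging to our pod
root@node-2:~# docker ps --filter "label=io.kubernetes.pod.name=date-pod"
</code>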
What if we want to delete a pod? Well, very simple:

====Delete a pod====
To delete a pod, again use the kubectl command as follows:

<code>
ubuntu@k8s-master:~$ kubectl get pods
NAME       READY   STATUS    RESTARTS   AGE
date-pod   1/1     Running   0          ...
ubuntu@k8s-master:~$ kubectl delete pod date-pod
pod "date-pod" deleted
ubuntu@k8s-master:~$
</code>
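
Alternatively, since the pod was born from a manifest, you can point kubectl at the same file and let it delete whatever the file describes:

<code>
# deletes every object defined in pod.yml (here: the date-pod pod)
ubuntu@k8s-master:~$ kubectl delete -f pod.yml
pod "date-pod" deleted
</code>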

But what if we want more resilience and load balancing? Let's say we want 5 replicas of that pod. Well, that is a job for the replication controller, which we will discuss in the next section.
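
As a teaser for that section, the replica count is literally one line in a manifest. A rough, untested sketch of a v1 ReplicationController wrapping our pod template (names reused from above, image elided as before) could look like this:

<code>
apiVersion: v1
kind: ReplicationController
metadata:
  name: date-rc
spec:
  replicas: 5            # keep 5 copies of the pod alive at all times
  selector:
    zone: prod           # manage every pod carrying this label
  template:              # the pod template: same fields as in pod.yml
    metadata:
      labels:
        zone: prod
        version: v1
    spec:
      containers:
      - name: hello-ctr
        image: andonovj/...   # same image as before (name elided in the source)
        ports:
        - containerPort: 8080
</code>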