Overview
Deployments: finally we have reached the last major object in Kubernetes. Deployments take everything we have been doing so far and encapsulate it. What I mean is the following:
Of course, you can have more than one Pod in the Replica Set. And yes, under a Deployment it is a Replica Set, not a Replication Controller, but the purpose is the same.
So let's create our Deployment now:
Configuration
To create a deployment again we have two ways:
- Imperative - with kubectl create and similar commands
- Declarative - using a YAML / JSON file; I will be using this one
As I am bored with the imperative way, let's clean our environment and create a Deployment declaratively. I won't delete my Service, as I will use it later.
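For reference, the imperative route would look roughly like this (a sketch, not what we run here; the image name is the one used throughout this page):

```shell
# Imperative creation of a comparable Deployment (sketch):
kubectl create deployment hello-deploy --image=andonovj/httpserverdemo:latest
# Then scale it to the desired replica count:
kubectl scale deployment hello-deploy --replicas=10
```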
Create Deployment (Declarative)
First, let's create our YML file:
Create Deployment YML file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deploy
spec:
  replicas: 10
  selector:
    matchLabels:
      app: hello-date
  template:
    metadata:
      labels:
        app: hello-date
    spec:
      containers:
      - name: hello-pod
        image: andonovj/httpserverdemo:latest
        ports:
        - containerPort: 1234
Bear in mind that the Deployment API has moved from extensions/v1beta1 → apps/v1 :) There are also a few small syntax changes compared to the old version, but nothing serious.
Once we have the YAML file, we can continue as follows:
Create deployment
ubuntu@k8s-master:~$ kubectl create -f deploy.yml
deployment.apps/hello-deploy created
ubuntu@k8s-master:~$ kubectl get deploy
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
hello-deploy   1/10    10           1           26s
ubuntu@k8s-master:~$ kubectl describe deploy
Name:                   hello-deploy
Namespace:              default
CreationTimestamp:      Sat, 02 May 2020 15:54:34 +0000
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=hello-date
Replicas:               10 desired | 10 updated | 10 total | 10 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=hello-date
  Containers:
   hello-pod:
    Image:        andonovj/httpserverdemo:latest
    Port:         1234/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   hello-deploy-6cd458494 (10/10 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  8m59s  deployment-controller  Scaled up replica set hello-deploy-6cd458494 to 10
Again, we can check if our app is working:
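For example, with a quick curl against the Service (the node IP and NodePort are placeholders here, since the Service details live on another page):

```shell
# <node-ip> and <node-port> are placeholders for your own cluster:
curl http://<node-ip>:<node-port>/
# The HTML response should contain the "Working..." header from our app.
```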
Replica Sets
As already mentioned, Deployments operate with Replica Sets, NOT Replication Controllers. Even though the Replica Set is a newer object, it inherits a lot from the Replication Controller. You can check the Replica Sets, which look almost the same as Replication Controllers, and notice that we don't have any Replication Controllers anymore…after I deleted them earlier, of course.
Check Replica Sets
ubuntu@k8s-master:~$ kubectl get rs
NAME                     DESIRED   CURRENT   READY   AGE
hello-deploy-6cd458494   10        10        10      11m
ubuntu@k8s-master:~$ kubectl get rc
No resources found in default namespace.
ubuntu@k8s-master:~$ kubectl describe rs
Name:           hello-deploy-6cd458494
Namespace:      default
Selector:       app=hello-date,pod-template-hash=6cd458494
Labels:         app=hello-date
                pod-template-hash=6cd458494
Annotations:    deployment.kubernetes.io/desired-replicas: 10
                deployment.kubernetes.io/max-replicas: 13
                deployment.kubernetes.io/revision: 1
Controlled By:  Deployment/hello-deploy
Replicas:       10 current / 10 desired
Pods Status:    10 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=hello-date
           pod-template-hash=6cd458494
  Containers:
   hello-pod:
    Image:        andonovj/httpserverdemo:latest
    Port:         1234/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age   From                   Message
  ----    ------            ----  ----                   -------
  Normal  SuccessfulCreate  16m   replicaset-controller  Created pod: hello-deploy-6cd458494-mjtw9
  Normal  SuccessfulCreate  16m   replicaset-controller  Created pod: hello-deploy-6cd458494-m9bgx
  Normal  SuccessfulCreate  16m   replicaset-controller  Created pod: hello-deploy-6cd458494-44jvs
  Normal  SuccessfulCreate  16m   replicaset-controller  Created pod: hello-deploy-6cd458494-6v8lt
  Normal  SuccessfulCreate  16m   replicaset-controller  Created pod: hello-deploy-6cd458494-zwh4l
  Normal  SuccessfulCreate  16m   replicaset-controller  Created pod: hello-deploy-6cd458494-jf594
  Normal  SuccessfulCreate  16m   replicaset-controller  Created pod: hello-deploy-6cd458494-vd7mt
  Normal  SuccessfulCreate  16m   replicaset-controller  Created pod: hello-deploy-6cd458494-nq2wx
  Normal  SuccessfulCreate  16m   replicaset-controller  Created pod: hello-deploy-6cd458494-f8j5f
  Normal  SuccessfulCreate  16m   replicaset-controller  (combined from similar events): Created pod: hello-deploy-6cd458494-dxqh7
ubuntu@k8s-master:~$ kubectl get rc
No resources found in default namespace.    <- We don't have Replication Controllers anymore.
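If you want to confirm the ownership chain yourself, the ReplicaSet's ownerReferences metadata shows which Deployment controls it (using the ReplicaSet name from the output above; jsonpath is a standard kubectl output format):

```shell
kubectl get rs hello-deploy-6cd458494 \
  -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}'
# Should print: Deployment/hello-deploy
```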
Rolling-Updates
Now we have a deployed application that is fully scalable, with redundancy provided as hell :) But what if we want to fix a couple of bugs or release a new version?
What do we do then? Well, let's edit our application and add “v2” to our awesome app:
Edit Software
Edit Source
namespace HttpServerDemo
{
    using System;
    using System.Net;
    using System.Net.Sockets;
    using System.Text;
    using System.Threading.Tasks;
    using System.IO;
    using System.Threading;

    class Program
    {
        public async static Task Main(string[] args)
        {
            TcpListener tcpListener = new TcpListener(IPAddress.Any, 1234);
            tcpListener.Start();
            while (true)
            {
                TcpClient tcpClient = await tcpListener.AcceptTcpClientAsync();
                await ProcessClientAsync(tcpClient);
            }
        }

        public static async Task ProcessClientAsync(TcpClient tcpClient)
        {
            const string NewLine = "\r\n";
            using (var networkStream = tcpClient.GetStream())
            {
                byte[] requestBytes = new byte[1000000]; // TODO: use a buffer
                int bytesRead = await networkStream.ReadAsync(requestBytes, 0, requestBytes.Length);
                string request = Encoding.UTF8.GetString(requestBytes, 0, bytesRead);

                string responseText = @"<h1>Working... v2</h1>" + // <- Updated this line with "v2" :)
                    $"<form> <h1> Time is: {System.DateTime.Now} </h1> </form>";

                string response = "HTTP/1.1 200 OK" + NewLine
                    + "Server: SoftuniServer/1.0 " + NewLine
                    + "Content-Type: text/html" + NewLine
                    + "Content-Length: " + responseText.Length + NewLine
                    + NewLine
                    + responseText;

                byte[] responseBytes = Encoding.UTF8.GetBytes(response);
                await networkStream.WriteAsync(responseBytes, 0, responseBytes.Length);

                Console.WriteLine(request);
                Console.WriteLine(new string('=', 60));
            }
        }
    }
}
You see that “v2” after “Working”? Yes, I just added it. Let's rebuild the image with Docker, upload it, and then update our Deployment to v2 :) The rebuild part is covered in the Docker section, but I will put the output here as well:
Dockerize, Tag & Push
Dockerize, Tag & Push
[root@postgresqlmaster httpserverdemo]# ls -lart
total 16
-rw-r--r--. 1 root root  178 Jan 15 11:29 HttpServerDemo.csproj
-rw-r--r--. 1 root root  405 Jan 24 07:20 Dockerfile
-rw-r--r--. 1 root root 1904 May  2 12:19 Program.cs
dr-xr-x---. 7 root root 4096 May  2 12:39 ..
drwxr-xr-x. 2 root root   71 May  2 12:45 .
[root@postgresqlmaster httpserverdemo]# docker build -t httpserverdemo .
Sending build context to Docker daemon  5.632kB
Step 1/10 : FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build-env
 ---> 4aa6a74611ff
Step 2/10 : WORKDIR /app
 ---> Using cache
 ---> b7f1f954c7e2
Step 3/10 : COPY *.csproj ./
 ---> Using cache
 ---> eb10b9d797fa
Step 4/10 : RUN dotnet restore
 ---> Using cache
 ---> 7db569180b05
Step 5/10 : COPY . ./
 ---> c1d5d3f3ebb4
Removing intermediate container 56e2e79130b2
Step 6/10 : RUN dotnet publish -c Release -o out
 ---> Running in 8d0859d775dd
Microsoft (R) Build Engine version 16.5.0+d4cbfca49 for .NET Core
Copyright (C) Microsoft Corporation. All rights reserved.
  Restore completed in 49.81 ms for /app/HttpServerDemo.csproj.
  HttpServerDemo -> /app/bin/Release/netcoreapp3.1/HttpServerDemo.dll
  HttpServerDemo -> /app/out/
 ---> b70ac7b2e6d8
Removing intermediate container 8d0859d775dd
Step 7/10 : FROM mcr.microsoft.com/dotnet/core/aspnet:2.2
2.2: Pulling from dotnet/core/aspnet
804555ee0376: Already exists
970251047358: Pull complete
f3d4c41a4fd1: Pull complete
bd391c46585f: Pull complete
Digest: sha256:08277d629af9d5324b63420a650cd96f86e73c4cfdcef6ea3c45912e7578956d
Status: Downloaded newer image for mcr.microsoft.com/dotnet/core/aspnet:2.2
 ---> e7e3b238011c
Step 8/10 : WORKDIR /app
 ---> ceb374fc780a
Removing intermediate container d77bfe0f6d8f
Step 9/10 : COPY --from=build-env /app/out .
 ---> ee079c0d9469
Removing intermediate container 9bb39a98ec96
Step 10/10 : ENTRYPOINT dotnet HttpServerDemo.dll
 ---> Running in 99cf13dd6c77
 ---> 452b7f6a095c
Removing intermediate container 99cf13dd6c77
Successfully built 452b7f6a095c
Successfully tagged httpserverdemo:latest
[root@postgresqlmaster httpserverdemo]#
[root@postgresqlmaster httpserverdemo]# docker image ls
REPOSITORY       TAG      IMAGE ID       CREATED              SIZE
httpserverdemo   latest   452b7f6a095c   About a minute ago   261MB
[root@postgresqlmaster httpserverdemo]# docker tag 452b7f6a095c andonovj/httpserverdemo:v2
[root@postgresqlmaster httpserverdemo]# docker push andonovj/httpserverdemo:v2
The push refers to a repository [docker.io/andonovj/httpserverdemo]
9411dc505491: Pushed
e595e200408f: Pushed
579a8f1d6a12: Pushed
15e45d99c926: Pushed
0cf75cb98eb2: Pushed
814c70fdae62: Pushed
v2: digest: sha256:f83ed7e653ec409ba00e3710391608d124aac397b1abf6dab5c0482447137cdf size: 1579
[root@postgresqlmaster httpserverdemo]#
aaaaand….we are done :) We have edited our awesome software and uploaded it to Docker Hub.
Let's now update the containers in the pods :)
Perform the Update
For that purpose, we have to extend the YAML file with a couple more values:
- strategy: RollingUpdate (the default)
- maxUnavailable: 1 - never have more than 1 pod unavailable during the update
- maxSurge: 1 - never have more than 1 additional pod; in our case, never more than 11 pods in total
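To make that arithmetic concrete, here is what those two values mean for our 10-replica Deployment (just shell arithmetic, not a kubectl feature):

```shell
# Pod-count bounds during the rolling update, from the Deployment fields
# (replicas=10, maxUnavailable=1, maxSurge=1 as in our YAML):
replicas=10
max_unavailable=1
max_surge=1
echo "minimum ready pods: $((replicas - max_unavailable))"   # -> 9
echo "maximum total pods:  $((replicas + max_surge))"        # -> 11
```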
So, our YAML file should look as follows:
Updated YAML file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deploy
spec:
  replicas: 10
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  selector:
    matchLabels:
      app: hello-date
  template:
    metadata:
      labels:
        app: hello-date
    spec:
      containers:
      - name: hello-pod
        image: andonovj/httpserverdemo:v2
        ports:
        - containerPort: 1234
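With the updated file saved, a typical sequence to push and watch the rolling update looks like this (a sketch; these are all standard kubectl rollout subcommands):

```shell
kubectl apply -f deploy.yml                       # push the v2 spec
kubectl rollout status deployment/hello-deploy    # follow the rolling update live
kubectl rollout history deployment/hello-deploy   # list recorded revisions
# And if v2 misbehaves, roll back to the previous revision:
kubectl rollout undo deployment/hello-deploy
```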