Overview
Deployments! Finally, we have reached the last major object in Kubernetes. Deployments take everything we have been doing so far and encapsulate it. What I mean is the following:
Of course, you can have more than one Pod in a ReplicaSet. And yes, under a Deployment it is a ReplicaSet, not a Replication Controller, but the purpose is the same.
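To make the nesting concrete: once the Deployment below exists, all three layers can be listed at once (just a sketch; it relies on the app=hello-date label from the manifest further down):

# A Deployment owns a ReplicaSet, which in turn owns the Pods:
kubectl get deploy,rs,pods -l app=hello-date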
So let's create our Deployment now:
Configuration
To create a Deployment, we again have two options:
- Imperative - with kubectl create and so on
- Declarative - using a YAML / JSON file; this is the one I will be using.
As I am bored with the imperative way, let's clean our environment and create a Deployment declaratively (a rough imperative equivalent is sketched below for reference). I won't delete my Service, as I will use it later.
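For reference, the imperative route would look roughly like this (a sketch, not what we will run; the scale step is separate because older kubectl releases have no replicas flag on create deployment):

kubectl create deployment hello-deploy --image=andonovj/httpserverdemo:latest
kubectl scale deployment hello-deploy --replicas=10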
Create Deployment (Declarative)
First, let's create our YML file:
Create Deployment YML file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deploy
spec:
  replicas: 10
  selector:
    matchLabels:
      app: hello-date
  template:
    metadata:
      labels:
        app: hello-date
    spec:
      containers:
      - name: hello-pod
        image: andonovj/httpserverdemo:latest
        ports:
        - containerPort: 1234
Bear in mind that Deployments have been moved from extensions/v1beta1 → apps/v1 onward :) There are also a few small syntax changes compared to the old version, but nothing serious.
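If you are unsure which API group/version your own cluster serves, you can ask it directly:

kubectl explain deployment | head -n 3      # prints KIND and VERSION
kubectl api-resources | grep -i deployment  # shows the apps API group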
Once we have the YAML file, we can continue as follows:
Create deployment
ubuntu@k8s-master:~$ kubectl create -f deploy.yml
deployment.apps/hello-deploy created
ubuntu@k8s-master:~$ kubectl get deploy
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
hello-deploy   1/10    10           1           26s
ubuntu@k8s-master:~$ kubectl describe deploy
Name:                   hello-deploy
Namespace:              default
CreationTimestamp:      Sat, 02 May 2020 15:54:34 +0000
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=hello-date
Replicas:               10 desired | 10 updated | 10 total | 10 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=hello-date
  Containers:
   hello-pod:
    Image:        andonovj/httpserverdemo:latest
    Port:         1234/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   hello-deploy-6cd458494 (10/10 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  8m59s  deployment-controller  Scaled up replica set hello-deploy-6cd458494 to 10
Again, we can check if our app is working:
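From the command line, one way is to go through the Service we kept from the previous part (a sketch; <node-ip> and <node-port> are placeholders for whatever your NodePort Service exposes):

kubectl get svc                      # look up the Service's NodePort
curl http://<node-ip>:<node-port>/   # should return the "Working..." page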
Replica Sets
As already mentioned, Deployments operate with ReplicaSets, NOT Replication Controllers. Even though the ReplicaSet is a newer object, it inherits a lot from the Replication Controller. You can check the ReplicaSets, which look almost the same as Replication Controllers, and notice that we don't have any Replication Controllers anymore… after I deleted them earlier, of course.
Check Replica Sets
ubuntu@k8s-master:~$ kubectl get rs
NAME                     DESIRED   CURRENT   READY   AGE
hello-deploy-6cd458494   10        10        10      11m
ubuntu@k8s-master:~$ kubectl get rc
No resources found in default namespace.
ubuntu@k8s-master:~$ kubectl describe rs
Name:           hello-deploy-6cd458494
Namespace:      default
Selector:       app=hello-date,pod-template-hash=6cd458494
Labels:         app=hello-date
                pod-template-hash=6cd458494
Annotations:    deployment.kubernetes.io/desired-replicas: 10
                deployment.kubernetes.io/max-replicas: 13
                deployment.kubernetes.io/revision: 1
Controlled By:  Deployment/hello-deploy
Replicas:       10 current / 10 desired
Pods Status:    10 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=hello-date
           pod-template-hash=6cd458494
  Containers:
   hello-pod:
    Image:        andonovj/httpserverdemo:latest
    Port:         1234/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age   From                   Message
  ----    ------            ----  ----                   -------
  Normal  SuccessfulCreate  16m   replicaset-controller  Created pod: hello-deploy-6cd458494-mjtw9
  Normal  SuccessfulCreate  16m   replicaset-controller  Created pod: hello-deploy-6cd458494-m9bgx
  Normal  SuccessfulCreate  16m   replicaset-controller  Created pod: hello-deploy-6cd458494-44jvs
  Normal  SuccessfulCreate  16m   replicaset-controller  Created pod: hello-deploy-6cd458494-6v8lt
  Normal  SuccessfulCreate  16m   replicaset-controller  Created pod: hello-deploy-6cd458494-zwh4l
  Normal  SuccessfulCreate  16m   replicaset-controller  Created pod: hello-deploy-6cd458494-jf594
  Normal  SuccessfulCreate  16m   replicaset-controller  Created pod: hello-deploy-6cd458494-vd7mt
  Normal  SuccessfulCreate  16m   replicaset-controller  Created pod: hello-deploy-6cd458494-nq2wx
  Normal  SuccessfulCreate  16m   replicaset-controller  Created pod: hello-deploy-6cd458494-f8j5f
  Normal  SuccessfulCreate  16m   replicaset-controller  (combined from similar events): Created pod: hello-deploy-6cd458494-dxqh7
ubuntu@k8s-master:~$ kubectl get rc
No resources found in default namespace.    <- We don't have Replication Controllers anymore.
Rolling Updates
Now we have a deployed application which is totally scalable, with redundancy provided as hell :) But what if we want to fix a couple of bugs or ship a new version?
What do we do then? Well, let's edit our application and add "v2" to our awesome app:
Edit Software
Edit Source
namespace HttpServerDemo
{
    using System;
    using System.Net;
    using System.Net.Sockets;
    using System.Text;
    using System.Threading.Tasks;
    using System.IO;
    using System.Threading;

    class Program
    {
        public async static Task Main(string[] args)
        {
            TcpListener tcpListener = new TcpListener(IPAddress.Any, 1234);
            tcpListener.Start();
            while (true)
            {
                TcpClient tcpClient = await tcpListener.AcceptTcpClientAsync();
                await ProcessClientAsync(tcpClient);
            }
        }

        public static async Task ProcessClientAsync(TcpClient tcpClient)
        {
            const string NewLine = "\r\n";
            using (var networkStream = tcpClient.GetStream())
            {
                byte[] requestBytes = new byte[1000000]; // TODO: use a buffer
                int bytesRead = await networkStream.ReadAsync(requestBytes, 0, requestBytes.Length);
                string request = Encoding.UTF8.GetString(requestBytes, 0, bytesRead);

                string responseText = @"<h1>Working... v2</h1>" + // <- Updated this line with "v2" :)
                    $"<form> <h1> Time is: {System.DateTime.Now} </h1> </form>";

                string response = "HTTP/1.1 200 OK" + NewLine +
                    "Server: SoftuniServer/1.0 " + NewLine +
                    "Content-Type: text/html" + NewLine +
                    "Content-Length: " + responseText.Length + NewLine + NewLine +
                    responseText;

                byte[] responseBytes = Encoding.UTF8.GetBytes(response);
                await networkStream.WriteAsync(responseBytes, 0, responseBytes.Length);

                Console.WriteLine(request);
                Console.WriteLine(new string('=', 60));
            }
        }
    }
}
You see that "v2" after "Working"? Yes, I just added it. Let's rebuild the image with Docker, upload it, and then move our Deployment to v2 :) The rebuild part is covered in the Docker section, but I will put the output here as well:
Dockerize, Tag & Push
Dockerize, Tag & Push
[root@postgresqlmaster httpserverdemo]# ls -lart
total 16
-rw-r--r--. 1 root root  178 Jan 15 11:29 HttpServerDemo.csproj
-rw-r--r--. 1 root root  405 Jan 24 07:20 Dockerfile
-rw-r--r--. 1 root root 1904 May  2 12:19 Program.cs
dr-xr-x---. 7 root root 4096 May  2 12:39 ..
drwxr-xr-x. 2 root root   71 May  2 12:45 .
root@k8s-master:/home/vagrant/HttpServerDemo# docker build -t httpserverdemo .
Sending build context to Docker daemon  6.144kB
Step 1/11 : FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build-env
 ---> 4aa6a74611ff
Step 2/11 : WORKDIR /app
 ---> Using cache
 ---> e6fb8470c359
Step 3/11 : COPY *.csproj ./
 ---> Using cache
 ---> c6e7e3257ccd
Step 4/11 : RUN dotnet restore
 ---> Using cache
 ---> 073b0e6dcfac
Step 5/11 : COPY . ./
 ---> dbb416239305
Step 6/11 : RUN dotnet publish -c Release -o out
 ---> Running in cdccacf739ec
Microsoft (R) Build Engine version 16.5.0+d4cbfca49 for .NET Core
Copyright (C) Microsoft Corporation. All rights reserved.
  Restore completed in 38.37 ms for /app/HttpServerDemo.csproj.
  HttpServerDemo -> /app/bin/Release/netcoreapp3.1/HttpServerDemo.dll
  HttpServerDemo -> /app/out/
Removing intermediate container cdccacf739ec
 ---> 29d5a30972d4
Step 7/11 : FROM mcr.microsoft.com/dotnet/core/sdk:3.1
 ---> 4aa6a74611ff
Step 8/11 : WORKDIR /app
 ---> Using cache
 ---> e6fb8470c359
Step 9/11 : COPY --from=build-env /app/out .
 ---> ff034e158a2e
Step 10/11 : RUN find -type d -name bin -prune -exec rm -rf {} \; && find -type d -name obj -prune -exec rm -rf {} \;
 ---> Running in e217c60056a6
Removing intermediate container e217c60056a6
 ---> f3755f21e57e
Step 11/11 : ENTRYPOINT ["dotnet", "HttpServerDemo.dll"]
 ---> Running in e49283dde769
Removing intermediate container e49283dde769
 ---> 9f2f48860257
Successfully built 9f2f48860257
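The capture above covers only the build; the tag & push steps promised in the caption would look roughly like this (the v2 tag is inferred from the updated manifest below):

docker tag httpserverdemo andonovj/httpserverdemo:v2
docker push andonovj/httpserverdemo:v2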
aaaaand… we are done :) We have edited our awesome software and uploaded it to Docker Hub.
Let's now update the containers in the pods :)
Perform the Update
For that purpose, we have to edit the YAML file and add a couple more values:
- strategy: RollingUpdate (the default)
- maxUnavailable: 1 - never have more than 1 Pod unavailable; in our case, never fewer than 9 Pods
- maxSurge: 1 - never have more than 1 additional Pod; in our case, never more than 11 Pods
So, our YAML file should look as follows:
Updated YAML file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deploy
spec:
  replicas: 10
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  selector:
    matchLabels:
      app: hello-date
  template:
    metadata:
      labels:
        app: hello-date
    spec:
      containers:
      - name: hello-pod
        image: andonovj/httpserverdemo:v2
        ports:
        - containerPort: 1234
Once we have the file, we can kick off the process like so:
Apply the Rolling Update
ubuntu@k8s-master:~$ kubectl apply -f deploy.yml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
deployment.apps/hello-deploy configured
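As a side note, the same rolling update could have been triggered imperatively, without editing the YAML at all (hello-pod is the container name from our manifest):

kubectl set image deployment/hello-deploy hello-pod=andonovj/httpserverdemo:v2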
You can see the progress of the Rolling Update as follows:
Rolling Update Status
ubuntu@k8s-master:~$ kubectl rollout status deploy hello-deploy
Waiting for deployment "hello-deploy" rollout to finish: 2 out of 10 new replicas have been updated...
^Cubuntu@k8s-master:~$ kubectl get pods
NAME                            READY   STATUS              RESTARTS   AGE
hello-deploy-6cd458494-6mvt7    1/1     Running             0          80s
hello-deploy-6cd458494-9bl7c    1/1     Running             0          80s
hello-deploy-6cd458494-fvwvz    1/1     Running             0          80s
hello-deploy-6cd458494-grp2w    1/1     Running             0          80s
hello-deploy-6cd458494-ldsxq    1/1     Running             0          80s
hello-deploy-6cd458494-lpgdj    1/1     Running             0          80s
hello-deploy-6cd458494-mjsmh    1/1     Running             0          80s
hello-deploy-6cd458494-rd58j    1/1     Running             0          80s
hello-deploy-6cd458494-rrkhq    1/1     Running             0          80s
hello-deploy-7f44bd8b96-k92sr   0/1     ContainerCreating   0          23s
hello-deploy-7f44bd8b96-lqmdx   0/1     ContainerCreating   0          23s
ubuntu@k8s-master:~$ kubectl rollout status deploy hello-deploy
Waiting for deployment "hello-deploy" rollout to finish: 4 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 4 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 8 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 8 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 8 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 8 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 8 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "hello-deploy" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "hello-deploy" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "hello-deploy" rollout to finish: 1 old replicas are pending termination...
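If something looks off mid-rollout, you can pause the rollout, investigate, and resume it; watching the Pods churn in real time is also handy:

kubectl rollout pause deployment hello-deploy
kubectl rollout resume deployment hello-deploy
kubectl get pods -w   # -w (--watch): stream Pod status changes live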
We can also check the version which is currently being used:
Check the Deployment Version
ubuntu@k8s-master:~$ kubectl get deploy hello-deploy
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
hello-deploy   10/10   10           10          5m7s
ubuntu@k8s-master:~$ kubectl describe deploy hello-deploy
Name:                   hello-deploy
Namespace:              default
CreationTimestamp:      Sun, 03 May 2020 11:28:33 +0000
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 2
Selector:               app=hello-date
Replicas:               10 desired | 10 updated | 10 total | 10 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        10
RollingUpdateStrategy:  1 max unavailable, 1 max surge
Pod Template:
  Labels:  app=hello-date
  Containers:
   hello-pod:
    Image:        andonovj/httpserverdemo:edge    <- The fresh image we just built (this capture shows the edge tag; the manifest above references v2 - the point is the tag changed).
    Port:         1234/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   hello-deploy-7f44bd8b96 (10/10 replicas created)
Events:
  Type    Reason             Age                 From                   Message
  ----    ------             ----                ----                   -------
  Normal  ScalingReplicaSet  5m11s               deployment-controller  Scaled up replica set hello-deploy-6cd458494 to 10
  Normal  ScalingReplicaSet  4m14s               deployment-controller  Scaled up replica set hello-deploy-7f44bd8b96 to 1
  Normal  ScalingReplicaSet  4m14s               deployment-controller  Scaled down replica set hello-deploy-6cd458494 to 9
  Normal  ScalingReplicaSet  4m14s               deployment-controller  Scaled up replica set hello-deploy-7f44bd8b96 to 2
  Normal  ScalingReplicaSet  2m18s               deployment-controller  Scaled down replica set hello-deploy-6cd458494 to 7
  Normal  ScalingReplicaSet  2m18s               deployment-controller  Scaled up replica set hello-deploy-7f44bd8b96 to 4
  Normal  ScalingReplicaSet  2m3s                deployment-controller  Scaled down replica set hello-deploy-6cd458494 to 5
  Normal  ScalingReplicaSet  2m3s                deployment-controller  Scaled up replica set hello-deploy-7f44bd8b96 to 6
  Normal  ScalingReplicaSet  108s                deployment-controller  Scaled down replica set hello-deploy-6cd458494 to 3
  Normal  ScalingReplicaSet  82s (x4 over 108s)  deployment-controller  (combined from similar events): Scaled down replica set hello-deploy-6cd458494 to 0
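If you only want the image, and not the whole describe output, jsonpath gets straight to it:

kubectl get deploy hello-deploy -o jsonpath='{.spec.template.spec.containers[0].image}'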
Rolling Back
Now, before we continue, let's speak a little bit about Change History. We can check the history of all changes as follows:
Check Change History
ubuntu@k8s-master:~$ kubectl rollout history deploy hello-deploy
deployment.apps/hello-deploy
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
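You can also drill into what a specific revision contained, using the revision numbers from the history above:

kubectl rollout history deploy hello-deploy --revision=2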
The reason you don't see anything in the "CHANGE-CAUSE" column is that we didn't use "--record" when we applied the change. But let's reverse our changes and apply them again. We can reverse our changes using the following command:
Rollback
ubuntu@k8s-master:~$ kubectl rollout undo deployment hello-deploy --to-revision=1
deployment.apps/hello-deploy rolled back
ubuntu@k8s-master:~$
Of course we can monitor it with the same command as before:
Rollback
ubuntu@k8s-master:~$ kubectl rollout status deploy hello-deploy
Waiting for deployment "hello-deploy" rollout to finish: 4 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 4 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 8 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 8 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 8 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 8 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 8 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "hello-deploy" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "hello-deploy" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "hello-deploy" rollout to finish: 1 old replicas are pending termination...
deployment "hello-deploy" successfully rolled out
ubuntu@k8s-master:~$ kubectl describe deploy
Name:                   hello-deploy
Namespace:              default
CreationTimestamp:      Sun, 03 May 2020 11:28:33 +0000
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 3
Selector:               app=hello-date
Replicas:               10 desired | 10 updated | 10 total | 10 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        10
RollingUpdateStrategy:  1 max unavailable, 1 max surge
Pod Template:
  Labels:  app=hello-date
  Containers:
   hello-pod:
    Image:        andonovj/httpserverdemo:latest    <- We are back to the old deployment.
    Port:         1234/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   hello-deploy-6cd458494 (10/10 replicas created)
Events:
  Type    Reason             Age  From                   Message
  ----    ------             ---- ----                   -------
  Normal  ScalingReplicaSet  45m  deployment-controller  Scaled up replica set hello-deploy-6cd458494 to 10
  Normal  ScalingReplicaSet  44m  deployment-controller  Scaled up replica set hello-deploy-7f44bd8b96 to 1
  Normal  ScalingReplicaSet  44m  deployment-controller  Scaled down replica set hello-deploy-6cd458494 to 9
Rolling Update with --record
Let's apply the rolling update again, WITH the --record flag this time:
Rolling Update with --record
ubuntu@k8s-master:~$ kubectl apply -f deploy.yml --record
deployment.apps/hello-deploy configured
ubuntu@k8s-master:~$
ubuntu@k8s-master:~$ kubectl rollout history deploy hello-deploy
deployment.apps/hello-deploy
REVISION  CHANGE-CAUSE
3         <none>
4         kubectl apply --filename=deploy.yml --record=true
Now we can see that the change cause was recorded, and the history of changes is visible.
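One closing note: in kubectl releases newer than the one used here, --record is deprecated. Setting the change-cause annotation yourself produces the same history entry (the message text below is just an example):

kubectl annotate deployment hello-deploy kubernetes.io/change-cause="upgraded image to v2"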