Overview
So far we have configured a Replication Controller whose application runs on 10 pods, spread across 2 servers (nodes). Apart from the fact that we haven't seen the app yet, and that connecting directly to a pod's IP is very unreliable, nothing we have done so far is visible to the outside world :)
Services address 3 major problems:
- How do we access the application from inside the Cluster?
- How do we access the application from outside the Cluster?
- If we need to connect to a pod's IP to use the app, what happens when that Pod dies (remember, we don't care about them when they do)?
A Service creates a VIP (Virtual IP) which will not change; that IP forwards traffic to the Pod network. So if a Pod fails, the user won't notice, as the Service simply updates the list of Pod IPs it holds.
Let me try to explain it via a visual representation:
As you can see, the service doesn't care if a pod dies (because its node dies, for example); it just removes the failed pod's IP from its list and adds the newly created pod's IP.
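By the way, you can peek at the Pod IPs a service would track; a minimal check, assuming the pods still carry the app=hello-date label from the Replication Controller part:
List Pod IPs
ubuntu@k8s-master:~$ kubectl get pods -l app=hello-date -o wide
This lists each pod together with its IP and the node it runs on; these are exactly the IPs the service will keep in its list.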
So let's create a service:
Service Configuration
First things first, there are 2 ways to create a service:
- Imperative - As an addition to an already existing Replication Controller
- Declarative - Using a YML/JSON file, as we did with the Replication Controller and the Pod.
Create Service (Imperative)
To create a service, we simply use our trusty “kubectl” as follows:
Create service
ubuntu@k8s-master:~$ kubectl expose rc hello-rc --name=hello-svc --target-port=1234 --type=NodePort
service/hello-svc exposed
ubuntu@k8s-master:~$
So what have we done with the above statement? We have exposed the Replication Controller “hello-rc” as a service named “hello-svc”, targeting port “1234” on the pods.
We can of course check the service as follows:
Check Service
ubuntu@k8s-master:~$ kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
hello-svc    NodePort    10.99.58.101   <none>        1234:32727/TCP   13m
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP          11d
ubuntu@k8s-master:~$ kubectl describe svc hello-svc
Name:                     hello-svc
Namespace:                default
Labels:                   app=hello-date
Annotations:              <none>
Selector:                 app=hello-date
Type:                     NodePort
IP:                       10.99.58.101
Port:                     <unset>  1234/TCP
TargetPort:               1234/TCP
NodePort:                 <unset>  32727/TCP
Endpoints:                192.168.247.10:1234,192.168.247.11:1234,192.168.247.7:1234 + 7 more...
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
ubuntu@k8s-master:~$
From the description you can see that, if we want to access our application from outside, we have to use port “32727” (the NodePort).
The IP specified here, 10.99.58.101, is the VIP, which you can connect to if you have access to that network. If you don't, you can use the IP of any of the nodes; in my case I'll use the API host (192.168.50.10).
So let's see:
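If you prefer the terminal over the browser, here is a minimal check, assuming the node's IP (192.168.50.10) is reachable from your machine and the app answers plain HTTP:
Check Service via curl
ubuntu@k8s-master:~$ curl http://192.168.50.10:32727/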
So how do we delete a service once it's created? Again, using the kubectl command:
Delete service
ubuntu@k8s-master:~$ kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
hello-svc    NodePort    10.99.58.101   <none>        1234:32727/TCP   35m
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP          11d
ubuntu@k8s-master:~$ kubectl delete svc hello-svc
service "hello-svc" deleted
ubuntu@k8s-master:~$
Of course our page won't reload after that, but that's fine, because there is a better way to create a service…
Create Service (Declarative)
As already mentioned with the Pod and the Replication Controller, we can simply create a YML file and put our deepest desires there. In our case, I have created a simple YML file to hold that declaration for the API server:
Service YML File
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
  labels:
    app: hello-date
spec:
  type: NodePort
  ports:
  - port: 1234
    nodePort: 30001
    protocol: TCP
  selector:
    app: hello-date
So what we have here is a service called “hello-svc”, which:
- Relates to the app: hello-date
- Relies on API version 1
- Is of type: NodePort
- Maps service port 1234 → node port 30001 over the TCP protocol
Now, there are a couple of things to address here before we continue further. We have mentioned NodePort before, but what exactly is a NodePort? Well, that is the service type.
But wait…are there more service types? Well yes…there are :) But let's discuss 3 of them now:
- ClusterIP - Provides a stable internal IP within the cluster. Thus it can be used for communication with the app from WITHIN the cluster (see the sketch right after this list).
- NodePort - Exposes our app outside the cluster by adding a cluster-wide port (by default in the range 30000-32767) on top of the ClusterIP.
- LoadBalancer - Integrates the NodePort with a cloud-based load balancer, more on that later :)
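To make ClusterIP concrete, here is a minimal sketch of an internal-only variant of our service. The name “hello-internal” is made up for illustration; it assumes the pods carry the same app=hello-date label:
ClusterIP Service YML File (sketch)
apiVersion: v1
kind: Service
metadata:
  name: hello-internal      # hypothetical name, for illustration only
spec:
  type: ClusterIP           # internal-only VIP, no port is opened on the nodes
  ports:
  - port: 1234              # port the VIP listens on
    targetPort: 1234        # port the containers listen on
  selector:
    app: hello-date         # same label our pods carry
Inside the cluster, other pods could then reach the app on the VIP (or via the DNS name hello-internal) on port 1234, while nothing is exposed externally.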
So let's continue :) To create a service using a YML file, we just pass it to the API as follows:
Create service
ubuntu@k8s-master:~$ kubectl create -f svc.yml
service/hello-svc created
We can of course check our service as follows:
Check Service
ubuntu@k8s-master:~$ kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
hello-svc    NodePort    10.110.235.213   <none>        1234:30001/TCP   11s
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP          11d
ubuntu@k8s-master:~$ kubectl describe svc hello-svc
Name:                     hello-svc
Namespace:                default
Labels:                   app=hello-date
Annotations:              <none>
Selector:                 app=hello-date
Type:                     NodePort
IP:                       10.110.235.213
Port:                     <unset>  1234/TCP
TargetPort:               1234/TCP
NodePort:                 <unset>  30001/TCP
Endpoints:                192.168.247.10:1234,192.168.247.11:1234,192.168.247.7:1234 + 7 more...
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
Voila, we have a service again; however, this time the port mapping is fixed: 1234 → 30001. This means the service listens on port 1234 inside the cluster (on the VIP), while port 30001 is the one opened on every node for the outside world; both forward to port 1234 on our containers.
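Again, we can verify both paths from the terminal; a minimal check, assuming the same IPs as above:
Check Service via curl
ubuntu@k8s-master:~$ curl http://10.110.235.213:1234/    # via the VIP, from inside the cluster network
ubuntu@k8s-master:~$ curl http://192.168.50.10:30001/    # via the NodePort, from anywhere that reaches the node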
Endpoints
Remember that list of Pods which a service keeps? Well, that is the Endpoint. As soon as you create a service, an endpoint with the same name is created, so let's see it.
Check Endpoint
ubuntu@k8s-master:~$ kubectl get ep
NAME         ENDPOINTS                                                                 AGE
hello-svc    192.168.247.10:1234,192.168.247.11:1234,192.168.247.7:1234 + 7 more...   17m
kubernetes   192.168.50.10:6443                                                        11d
ubuntu@k8s-master:~$ kubectl describe ep hello-svc
Name:         hello-svc
Namespace:    default
Labels:       app=hello-date
Annotations:  endpoints.kubernetes.io/last-change-trigger-time: 2020-05-02T13:37:44Z
Subsets:
  Addresses:          192.168.247.10,192.168.247.11,192.168.247.7,192.168.247.8,192.168.247.9,192.168.84.134,192.168.84.135,192.168.84.136,192.168.84.137,192.168.84.138
  NotReadyAddresses:  <none>
  Ports:
    Name     Port  Protocol
    ----     ----  --------
    <unset>  1234  TCP
Events:  <none>
ubuntu@k8s-master:~$
In case any of these 10 Pods dies, it will get replaced and the new IP will be added here. All of this happens on the fly, without any intervention from the sysadmin.
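You can watch this self-healing in action; a minimal sketch, where <pod-name> stands for any one of the hello-rc pods (the exact name will differ in your cluster):
Watch Endpoint update
ubuntu@k8s-master:~$ kubectl delete pod <pod-name>
ubuntu@k8s-master:~$ kubectl get ep hello-svc
The Replication Controller spins up a replacement Pod, and a few seconds later the endpoint list shows the new Pod's IP in place of the old one.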