  storageClassName:
  capacity:
    storage: 50Gi
  persistentVolumeReclaimPolicy: Retain

  ubuntu@k8s-master:~$ kubectl get pv
  NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY
  ps-pv   50Gi       RWO            Retain
  ubuntu@k8s-master:~$ kubectl get pvc
  No resources found in default namespace.

  resources:
    requests:
      storage:

  ubuntu@k8s-master:~$ kubectl get pvc
  NAME     STATUS   VOLUME
  ps-pvc   Bound    ps-pv
  ubuntu@k8s-master:~$ kubectl get pv
  NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM
  ps-pv   50Gi       RWO            Retain           Bound    default/ps-pvc
  ubuntu@k8s-master:~$
We can see that they are now bound to each other, and the claim shows up in the Persistent Volume description.

It is important to note that the PVC will bind to ANY PV whose capacity is the SAME or MORE than the requested storage. For example, if the PVC asks for 20 GB and we have a 50 GB PV, it will bind (and the claim will show the full 50 GB capacity).

However, if our PVC asks for 50 GB and we only have a 20 GB PV, then we are out of luck: the PVC won't bind and will just sit there Pending.
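Here is a minimal sketch of that rule (the names, sizes and hostPath below are made up purely for illustration, they are not the manifests used on this page): a claim asking for 20Gi happily binds to a 50Gi volume, while a claim asking for more than any existing PV stays Pending.

<code>
apiVersion: v1
kind: PersistentVolume
metadata:
  name: demo-pv
spec:
  storageClassName: manual
  capacity:
    storage: 50Gi           # the PV offers 50Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /tmp/demo-pv      # purely for illustration
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi         # 20Gi <= 50Gi, so this claim binds to demo-pv
</code>

If the request were bigger than any available PV (say 100Gi here), the claim would simply stay in Pending state.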
Congrats, we now have a PVC that we can present to any pod that needs storage.
So let's create the Pod.

====Create a POD with Persistent Storage====
As always, here is an example of the Pod YAML:

<code>
apiVersion: v1
kind: Pod
metadata:
  name: first-pod
spec:
  volumes:
    - name: fast50g
      persistentVolumeClaim:
        claimName: ps-pvc
  containers:
    - name: ctr1
      image: ubuntu:latest
      command:
        - /bin/bash
        - "-c"
        - "sleep 60m"
      volumeMounts:
        - mountPath: "/data"
          name: fast50g
</code>

Again, I think a long explanation is useless here as the fields are self-explanatory, so as always we just create it:

<code>
ubuntu@k8s-master:~$ kubectl create -f ...
pod/first-pod created
ubuntu@k8s-master:~$ kubectl get pods
NAME        READY
first-pod
ubuntu@k8s-master:~$
</code>
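If you want to double-check which claim the pod actually picked up, describing it works fine; the Volumes section of the output lists the fast50g volume and the ps-pvc claim behind it:

<code>
# the Volumes section of the describe output shows the fast50g volume
# and the ps-pvc PersistentVolumeClaim it is backed by
ubuntu@k8s-master:~$ kubectl describe pod first-pod
</code>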

====Get on the Pod====
We installed that pod on a Kubernetes cluster with 1 master and 2 workers, so it didn't have to end up on the master; in fact it ended up on Worker 1 :) So let's check it there. On Worker 1 we can list all the containers as usual and connect to ours:

<code>
root@node-1:~# docker ps
CONTAINER ID        IMAGE
ca7f18335d32
d2e5fddd1d2d
c6694424e858
60ea2607bd11
3ea2eeed5344
4742768bace2
fd7678069cdd
598a580a0ab0
8cc487d0c45e
a64e7f2c167c
97da605cd3c7
e7c8dcebe1be
71c9f4548392
420f51aa6c56
b9cc5715f753
9b86b11b3b61
root@node-1:~# docker exec -it ... bash
root@first-pod:/# df -h
Filesystem
overlay
tmpfs 64M
tmpfs
/
shm 64M
tmpfs
tmpfs
tmpfs
tmpfs
root@first-pod:/# cd /data
root@first-pod:/data# ls -altr
total 8
drwxr-xr-x 2 root root 4096 May 22 15:23 .
drwxr-xr-x 1 root root 4096 May 22 15:23 ..
root@first-pod:/data# pwd
/data
root@first-pod:/data# touch test
root@first-pod:/data# ls -altr
total 8
drwxr-xr-x 1 root root 4096 May 22 15:23 ..
-rw-r--r-- 1 root root    0 May 22 15:29 test
drwxr-xr-x 2 root root 4096 May 22 15:29 .
</code>

So we have created a simple text file on the pod, under the mount "/data". Now let's check the host (Worker 1) and see whether the file is really there:

<code>
root@node-1:/...# hostname
node-1
root@node-1:/...# ls -altr
total 8
drwxr-xr-x 4 ubuntu ubuntu 4096 May 22 15:23 ..
-rw-r--r-- 1 root   root      0 May 22 15:29 test
drwxr-xr-x 2 root   root   4096 May 22 15:29 .
root@node-1:/...# pwd
/...
root@node-1:/...#
</code>
Lo and behold, the simple text file is on persistent storage and won't be affected if, for example, the container crashes. It will stay there, safe and sound, on the host server.
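If you want to convince yourself, here is a small sketch of such a test (the manifest file name below is just an assumption, use whatever file you created the pod from): delete the pod, recreate it, and the file is still in /data because it lives on the PV, not in the container.

<code>
ubuntu@k8s-master:~$ kubectl delete pod first-pod
pod "first-pod" deleted
ubuntu@k8s-master:~$ kubectl create -f first-pod.yml    # assumed file name of the Pod YAML above
pod/first-pod created
ubuntu@k8s-master:~$ kubectl exec first-pod -- ls /data
test
</code>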

As we mentioned, that kind of persistent storage allocation DOESN'T scale very well:

{{ : ... }}

You see that: mapping and mapping and mapping :) Let's see what we can do about it.
=====Dynamic Provisioning=====
Let's configure Dynamic storage.

Now, for Dynamic provisioning with NFS I had to re-configure the cluster. In a nutshell, make sure that the API server IP which you give when you initiate the cluster is in the same subnet as the pod network.

For example:

<code>
kubeadm init --ignore-preflight-errors=NumCPU --apiserver-advertise-address=192.168.50.10 --pod-network-cidr=192.168.50.0/24
</code>

Calico by default is using 192.168.0.0/16 for the pod network.

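As a side note (an alternative I did not use here): instead of re-initialising the cluster you can usually keep your preferred --pod-network-cidr and make Calico follow it, by un-commenting and adjusting the CALICO_IPV4POOL_CIDR variable in calico.yaml before applying it:

<code>
# excerpt from calico.yaml (env of the calico-node container)
- name: CALICO_IPV4POOL_CIDR
  value: "192.168.50.0/24"   # must match the --pod-network-cidr passed to kubeadm init
</code>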
So let's get going. In the beginning, I had only the system pods, something like this (from listing the pods in all namespaces):

<code>
NAMESPACE
kube-system
kube-system
kube-system
kube-system
kube-system
kube-system
kube-system
kube-system
kube-system
kube-system
kube-system
kube-system
kube-system
ubuntu@k8s-master:~$
</code>

So we have to create the following:

  * Deployment
  * Service Account
  * Service
  * RBAC
  * Storage Class
  * Claim

====Create the Deployment, Service and Service Account====
You can see the deployment, the service and the service account below; they all go into one file which we create in one shot:

<code>
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
---
kind: Service
apiVersion: v1
metadata:
  name: nfs-provisioner
  labels:
    app: nfs-provisioner
spec:
  ports:
    - name: nfs
      port: 2049
    - name: nfs-udp
      port: 2049
      protocol: UDP
    - name: nlockmgr
      port: 32803
    - name: nlockmgr-udp
      port: 32803
      protocol: UDP
    - name: mountd
      port: 20048
    - name: mountd-udp
      port: 20048
      protocol: UDP
    - name: rquotad
      port: 875
    - name: rquotad-udp
      port: 875
      protocol: UDP
    - name: rpcbind
      port: 111
    - name: rpcbind-udp
      port: 111
      protocol: UDP
    - name: statd
      port: 662
    - name: statd-udp
      port: 662
      protocol: UDP
  selector:
    app: nfs-provisioner
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-provisioner
spec:
  selector:
    matchLabels:
      app: nfs-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-provisioner
    spec:
      serviceAccount: nfs-provisioner
      containers:
        - name: nfs-provisioner
          image: quay.io/kubernetes_incubator/nfs-provisioner:latest
          ports:
            - name: nfs
              containerPort: 2049
            - name: nfs-udp
              containerPort: 2049
              protocol: UDP
            - name: nlockmgr
              containerPort: 32803
            - name: nlockmgr-udp
              containerPort: 32803
              protocol: UDP
            - name: mountd
              containerPort: 20048
            - name: mountd-udp
              containerPort: 20048
              protocol: UDP
            - name: rquotad
              containerPort: 875
            - name: rquotad-udp
              containerPort: 875
              protocol: UDP
            - name: rpcbind
              containerPort: 111
            - name: rpcbind-udp
              containerPort: 111
              protocol: UDP
            - name: statd
              containerPort: 662
            - name: statd-udp
              containerPort: 662
              protocol: UDP
          securityContext:
            capabilities:
              add:
                - DAC_READ_SEARCH
                - SYS_RESOURCE
          args:
            - "-provisioner=example.com/nfs"
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: SERVICE_NAME
              value: nfs-provisioner
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: export-volume
              mountPath: /export
      volumes:
        - name: export-volume
          hostPath:
            path: /srv
ubuntu@k8s-master:~$ kubectl create -f ...
serviceaccount/nfs-provisioner created
service/nfs-provisioner created
deployment.apps/nfs-provisioner created
</code>
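Before moving on it is worth checking that the provisioner pod and its service actually came up (standard kubectl checks, nothing specific to this page):

<code>
# the provisioner pod should be Running and the service should list the NFS-related ports from above
ubuntu@k8s-master:~$ kubectl get pods -l app=nfs-provisioner
ubuntu@k8s-master:~$ kubectl get svc nfs-provisioner
</code>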

Then let's create the RBAC, which creates the cluster roles and their bindings, and of course the storage class.

===Create RBAC and Storage Class===
<code>
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-provisioner
  apiGroup: rbac.authorization.k8s.io
ubuntu@k8s-master:~$
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: example-nfs
provisioner: example.com/nfs
mountOptions:
  - vers=4.1
ubuntu@k8s-master:~$ kubectl create -f ...
clusterrole.rbac.authorization.k8s.io/nfs-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-provisioner created
ubuntu@k8s-master:~$ kubectl create -f ...
storageclass.storage.k8s.io/example-nfs created
</code>
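Not strictly required, but a quick way to confirm the class is registered before creating claims against it:

<code>
# the new class should show up, with example.com/nfs as its provisioner
ubuntu@k8s-master:~$ kubectl get storageclass
</code>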

====Create Storage Claim====
With Dynamic provisioning, we only create the claim; the matching Persistent Volume is provisioned for us automatically:

<code>
ubuntu@k8s-master:~$
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs
  annotations:
    volume.beta.kubernetes.io/storage-class: "example-nfs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Mi
ubuntu@k8s-master:~$ kubectl create -f ...
</code>
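A side note: the volume.beta.kubernetes.io/storage-class annotation is the old (beta) way of selecting a class; on current Kubernetes versions the same claim is normally written with the storageClassName field instead, roughly like this:

<code>
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs
spec:
  storageClassName: example-nfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Mi
</code>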

====Verify====
We can verify the configuration as follows:

<code>
ubuntu@k8s-master:~$ kubectl get pv,pvc
NAME                   CAPACITY
persistentvolume/...   10Mi

NAME                        STATUS
persistentvolumeclaim/nfs   Bound
ubuntu@k8s-master:~$
</code>

Finally, we have a bound PVC using Dynamic Provisioning. There is one very good git repository with all of these files, the external-storage repo, which I cloned on the master:

<code>
ubuntu@k8s-master:~$ git clone ...   # the external-storage repository
ubuntu@k8s-master:~$
ubuntu@k8s-master:~$ ls -altr
total 32
-rw-r--r--
-rw-r--r--
-rw-r--r--
drwxr-xr-x
drwx------
-rw-r--r--
drwxrwxr-x
drwxr-xr-x
drwxrwxr-x 17 ubuntu ubuntu 4096 May 25 11:27 external-storage
</code>

====Create a Pod with Dynamic Provisioning====
We can of course create a pod which uses the NFS-backed claim; let's create an NGINX pod for example:

<code>
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nfs-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
        - name: nfs
          persistentVolumeClaim:
            claimName: nfs   # same name as the PVC that was created
      containers:
        - image: nginx
          name: nginx
          volumeMounts:
            - name: nfs            # must match the volume name defined above
              mountPath: /mydata2  # mount path inside the container (must be absolute)
ubuntu@k8s-master:~$ kubectl create -f ...
deployment.apps/nfs-nginx created
ubuntu@k8s-master:~$ kubectl get pods
NAME                              READY
nfs-nginx-6b4db6f57-4mczr
nfs-provisioner-7795cf6f4-d7m2l
ubuntu@k8s-master:~$
</code>
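To see the dynamically provisioned volume doing its job end to end, here is a quick sketch (the pvc-... directory name is generated, so it will differ on your cluster): write a file through the NGINX pod and look for it under the provisioner's export directory, which we backed with hostPath /srv on the node running the nfs-provisioner pod.

<code>
ubuntu@k8s-master:~$ kubectl exec nfs-nginx-6b4db6f57-4mczr -- sh -c 'echo hello > /mydata2/hello.txt'
# on the worker node that runs the nfs-provisioner pod:
root@node-1:~# ls /srv/pvc-*
hello.txt
</code>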

Even an Ubuntu pod:
<code>
ubuntu@k8s-master:~$
apiVersion: v1
kind: Pod
metadata:
  name: first-pod
spec:
  volumes:
    - name: fast10m
      persistentVolumeClaim:
        claimName: nfs
  containers:
    - name: ctr1
      image: ubuntu:latest
      command:
        - /bin/bash
        - "-c"
        - "sleep 60m"
      volumeMounts:
        - mountPath: "/data"
          name: fast10m
ubuntu@k8s-master:~$ kubectl get pods
NAME                              READY
first-pod
nfs-nginx-6b4db6f57-4mczr
nfs-provisioner-7795cf6f4-d7m2l
ubuntu@k8s-master:~$
</code>

Eureka, we are finally done with both types of provisioning!