k8s_basic_storage

Lo and behold, the simple text file is on persistent storage and won't be affected if the container crashes, for example. It will stay there, safe and sound, on the host server.
  
As we mentioned, that kind of persistent storage allocation DOESN'T SCALE. Let's see why:
  
{{ :kubernetes_storage_ps_yml_map.jpg?800 |}}
  
You see that: mapping and mapping and mapping :) Let's see what we can do about it.
  
=====Dynamic Provisioning=====
Let's configure dynamic storage. The idea here is that we (as administrators) care only about the PVC, NOT the PV. We create the PVC, and the provisioner creates the PV itself.

Now, for dynamic provisioning with NFS I had to re-configure the cluster. In a nutshell, make sure that the API advertise IP which you give when you initiate the cluster is in the same subnet as the pod network.

For example:

<Code:bash|Initiate Cluster for NFS>
kubeadm init --ignore-preflight-errors=NumCPU --apiserver-advertise-address=192.168.50.10 --pod-network-cidr=192.168.50.0/24
</Code>

Calico by default uses 192.168.0.0/16, so I modified it to 192.168.50.0/24 so that it matches the network of the API advertise IP.
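For reference, this is roughly how that change can be made. This is a sketch, assuming the standard calico.yaml manifest is used; depending on the Calico version, the CALICO_IPV4POOL_CIDR variable may be commented out in the file, in which case uncomment it as well:

<Code:bash|Adjust Calico Pod CIDR (sketch)>
# Download the manifest, point the IP pool at the new CIDR, then apply it.
# The sed pattern assumes the default 192.168.0.0/16 value is present in the file.
curl -O https://docs.projectcalico.org/manifests/calico.yaml
sed -i 's|192.168.0.0/16|192.168.50.0/24|' calico.yaml
kubectl apply -f calico.yaml
</Code>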

So let's get going. In the beginning, I had something like this:

<Code:bash|Overview>
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE     IP                NODE         NOMINATED NODE   READINESS GATES
kube-system   calico-kube-controllers-77c5fc8d7f-fcnh7   1/1     Running   0          3m56s   192.168.235.195   k8s-master   <none>           <none>
kube-system   calico-node-94qkt                          1/1     Running   0          66s     192.168.50.12     node-2       <none>           <none>
kube-system   calico-node-j54sq                          1/1     Running   0          2m18s   192.168.50.11     node-1       <none>           <none>
kube-system   calico-node-rc4t6                          1/1     Running   0          3m56s   192.168.50.10     k8s-master   <none>           <none>
kube-system   coredns-66bff467f8-d7hr5                   1/1     Running   0          10m     192.168.235.193   k8s-master   <none>           <none>
kube-system   coredns-66bff467f8-jmwk7                   1/1     Running   0          10m     192.168.235.194   k8s-master   <none>           <none>
kube-system   etcd-k8s-master                            1/1     Running   0          10m     192.168.50.10     k8s-master   <none>           <none>
kube-system   kube-apiserver-k8s-master                  1/1     Running   0          10m     192.168.50.10     k8s-master   <none>           <none>
kube-system   kube-controller-manager-k8s-master         1/1     Running   0          10m     192.168.50.10     k8s-master   <none>           <none>
kube-system   kube-proxy-8td28                           1/1     Running   0          66s     192.168.50.12     node-2       <none>           <none>
kube-system   kube-proxy-bljr8                           1/1     Running   0          10m     192.168.50.10     k8s-master   <none>           <none>
kube-system   kube-proxy-dcnqt                           1/1     Running   0          2m18s   192.168.50.11     node-1       <none>           <none>
kube-system   kube-scheduler-k8s-master                  1/1     Running   0          10m     192.168.50.10     k8s-master   <none>           <none>
ubuntu@k8s-master:~$
</Code>

So we have to create the following:

  * Deployment
  * Service Account
  * Service
  * RBAC
  * Storage Class
  * Claim
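All of these objects are defined in the external-storage repository mentioned further down, under nfs/deploy/kubernetes. Assuming that layout, the overall order of creation looks like this:

<Code:bash|Creation Order>
cd ~/external-storage/nfs/deploy/kubernetes
kubectl create -f deployment.yaml   # ServiceAccount, Service and Deployment
kubectl create -f rbac.yaml         # ClusterRole(Binding) and Role(Binding)
kubectl create -f class.yaml        # StorageClass
kubectl create -f claim.yaml        # PersistentVolumeClaim
</Code>

Each step is shown in detail below.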

====Create the Deployment, Service and Service Account====
You can see the deployment, service and service account YAML below:

<Code:bash|Components YML>
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
---
kind: Service
apiVersion: v1
metadata:
  name: nfs-provisioner
  labels:
    app: nfs-provisioner
spec:
  ports:
    - name: nfs
      port: 2049
    - name: nfs-udp
      port: 2049
      protocol: UDP
    - name: nlockmgr
      port: 32803
    - name: nlockmgr-udp
      port: 32803
      protocol: UDP
    - name: mountd
      port: 20048
    - name: mountd-udp
      port: 20048
      protocol: UDP
    - name: rquotad
      port: 875
    - name: rquotad-udp
      port: 875
      protocol: UDP
    - name: rpcbind
      port: 111
    - name: rpcbind-udp
      port: 111
      protocol: UDP
    - name: statd
      port: 662
    - name: statd-udp
      port: 662
      protocol: UDP
  selector:
    app: nfs-provisioner
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-provisioner
spec:
  selector:
    matchLabels:
      app: nfs-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-provisioner
    spec:
      serviceAccount: nfs-provisioner
      containers:
        - name: nfs-provisioner
          image: quay.io/kubernetes_incubator/nfs-provisioner:latest
          ports:
            - name: nfs
              containerPort: 2049
            - name: nfs-udp
              containerPort: 2049
              protocol: UDP
            - name: nlockmgr
              containerPort: 32803
            - name: nlockmgr-udp
              containerPort: 32803
              protocol: UDP
            - name: mountd
              containerPort: 20048
            - name: mountd-udp
              containerPort: 20048
              protocol: UDP
            - name: rquotad
              containerPort: 875
            - name: rquotad-udp
              containerPort: 875
              protocol: UDP
            - name: rpcbind
              containerPort: 111
            - name: rpcbind-udp
              containerPort: 111
              protocol: UDP
            - name: statd
              containerPort: 662
            - name: statd-udp
              containerPort: 662
              protocol: UDP
          securityContext:
            capabilities:
              add:
                - DAC_READ_SEARCH
                - SYS_RESOURCE
          args:
            - "-provisioner=example.com/nfs"
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: SERVICE_NAME
              value: nfs-provisioner
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: export-volume
              mountPath: /export
      volumes:
        - name: export-volume
          hostPath:
            path: /srv
ubuntu@k8s-master:~/external-storage/nfs/deploy/kubernetes$ kubectl create -f deployment.yaml
serviceaccount/nfs-provisioner created
service/nfs-provisioner created
deployment.apps/nfs-provisioner created
</Code>
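Before moving on, it is worth checking that the provisioner pod actually came up. The app=nfs-provisioner label used below is the one set in the deployment above:

<Code:bash|Check the Provisioner Pod>
kubectl get pods -l app=nfs-provisioner
</Code>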

Then let's create the RBAC, which creates the cluster roles and maps them, and of course the storage class.

===Create RBAC and Storage Class===
<Code:bash|Create RBAC & Storage Class>
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-provisioner
  apiGroup: rbac.authorization.k8s.io
ubuntu@k8s-master:~/external-storage/nfs/deploy/kubernetes$ cat class.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: example-nfs
provisioner: example.com/nfs
mountOptions:
  - vers=4.1
ubuntu@k8s-master:~/external-storage/nfs/deploy/kubernetes$ kubectl create -f rbac.yaml
clusterrole.rbac.authorization.k8s.io/nfs-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-provisioner created
ubuntu@k8s-master:~/external-storage/nfs/deploy/kubernetes$ kubectl create -f class.yaml
storageclass.storage.k8s.io/example-nfs created
</Code>
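A quick check that the class was registered before we start claiming from it:

<Code:bash|List Storage Classes>
kubectl get storageclass
kubectl describe storageclass example-nfs
</Code>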

====Create Storage Claim====
With dynamic provisioning, we DON'T create the volume, we create ONLY the claim. The volume is created automatically by the provisioner; that is the MAIN difference.

<Code:bash|Create Claim>
ubuntu@k8s-master:~/external-storage/nfs/deploy/kubernetes$ cat claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs
  annotations:
    volume.beta.kubernetes.io/storage-class: "example-nfs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Mi
ubuntu@k8s-master:~/
</Code>
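The manifest above is only displayed with cat. Assuming the same working directory, the claim itself is created with:

<Code:bash|Apply the Claim>
kubectl create -f claim.yaml
</Code>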

====Verify====
We can verify the configuration as follows:

<Code:bash|Verify PV & PVC>
ubuntu@k8s-master:~/external-storage/nfs/deploy/kubernetes$ kubectl get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM         STORAGECLASS   REASON   AGE
persistentvolume/pvc-9a8aa090-7c73-4e64-94eb-dcc7805828dd   10Mi       RWX            Delete           Bound    default/nfs   example-nfs             21m

NAME                        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/nfs   Bound    pvc-9a8aa090-7c73-4e64-94eb-dcc7805828dd   10Mi       RWX            example-nfs    21m
ubuntu@k8s-master:~/external-storage/nfs/deploy/kubernetes$
</Code>
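If you want more detail about the volume the provisioner created (the volume name below is taken from the output above), kubectl describe shows among other things the NFS server address and export path:

<Code:bash|Inspect the Dynamically Created PV>
kubectl describe pv pvc-9a8aa090-7c73-4e64-94eb-dcc7805828dd
</Code>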

Finally, we have a bound PVC using dynamic provisioning. There is one very good Git repository with all these files:

<Code:bash|Configure it with Git>
ubuntu@k8s-master:~$ git clone https://github.com/kubernetes-incubator/external-storage.git
ubuntu@k8s-master:~$
ubuntu@k8s-master:~$ ls -lart
total 32
-rw-r--r--  1 ubuntu ubuntu 3771 Aug 31  2015 .bashrc
-rw-r--r--  1 ubuntu ubuntu  220 Aug 31  2015 .bash_logout
-rw-r--r--  1 ubuntu ubuntu  655 Jul 12  2019 .profile
drwxr-xr-x  4 root   root   4096 May 25 10:51 ..
drwx------  2 ubuntu ubuntu 4096 May 25 10:51 .ssh
-rw-r--r--  1 ubuntu ubuntu    0 May 25 11:21 .sudo_as_admin_successful
drwxrwxr-x  4 ubuntu ubuntu 4096 May 25 11:22 .kube
drwxr-xr-x  5 ubuntu ubuntu 4096 May 25 11:27 .
drwxrwxr-x 17 ubuntu ubuntu 4096 May 25 11:27 external-storage                         <---- This one
</Code>

====Create a Pod with Dynamic Provisioning====
We can of course create a pod which will be using the NFS. Let's create an NGINX pod, for example:

<Code:bash|Create NGINX Pod>
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nfs-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
      - name: nfs
        persistentVolumeClaim:
          claimName: nfs  # same name as the PVC that was created
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - name: nfs # name of the volume defined above
          mountPath: /mydata2 # mount point inside of the container
ubuntu@k8s-master:~/external-storage/nfs/deploy/kubernetes$ kubectl create -f nginx.yml
deployment.apps/nfs-nginx created
ubuntu@k8s-master:~/external-storage/nfs/deploy/kubernetes$ kubectl get pods -o wide
NAME                              READY   STATUS    RESTARTS   AGE     IP              NODE     NOMINATED NODE   READINESS GATES
nfs-nginx-6b4db6f57-4mczr         1/1     Running   0          2m51s   192.168.247.2   node-2   <none>           <none>
nfs-provisioner-7795cf6f4-d7m2l   1/1     Running   0          67m     192.168.247.1   node-2   <none>           <none>
ubuntu@k8s-master:~/external-storage/nfs/deploy/kubernetes$
</Code>
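To confirm the NFS volume really was mounted into the container, you can list the mounts inside the pod (the pod name is taken from the output above):

<Code:bash|Check the Mount>
kubectl exec nfs-nginx-6b4db6f57-4mczr -- mount | grep nfs
</Code>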

Even an ubuntu pod:
<Code:bash|Ubuntu Pod>
ubuntu@k8s-master:~$ cat pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: first-pod
spec:
  volumes:
    - name: fast10m
      persistentVolumeClaim:
        claimName: nfs
  containers:
    - name: ctr1
      image: ubuntu:latest
      command:
      - /bin/bash
      - "-c"
      - "sleep 60m"
      volumeMounts:
      - mountPath: "/data"
        name: fast10m
ubuntu@k8s-master:~$ kubectl get pods -o wide
NAME                              READY   STATUS    RESTARTS   AGE     IP               NODE     NOMINATED NODE   READINESS GATES
first-pod                         1/1     Running   0          24s     192.168.84.131   node-1   <none>           <none>
nfs-nginx-6b4db6f57-4mczr         1/1     Running   0          4m16s   192.168.247.2    node-2   <none>           <none>
nfs-provisioner-7795cf6f4-d7m2l   1/1     Running   0          69m     192.168.247.1    node-2   <none>           <none>
ubuntu@k8s-master:~$
</Code>
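Since the claim is ReadWriteMany and both pods mount it, anything written from one pod is persisted on the NFS export behind them. A quick sketch of how to test that (pod names taken from the outputs above; per the provisioner deployment, the export lives at /export inside its container, backed by hostPath /srv on its node):

<Code:bash|Test the Shared Storage>
# Write a file through the ubuntu pod's /data mount ...
kubectl exec first-pod -- bash -c 'echo "hello from first-pod" > /data/test.txt'

# ... and look for it in the provisioner's export directory.
kubectl exec nfs-provisioner-7795cf6f4-d7m2l -- ls -R /export
</Code>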

Eureka, finally we are done with both types of provisioning!
  • Last modified: 2020/05/22 15:36
  • by andonovj