
Openshift/K8S: How to create an NFS Persistent Volume Claim for your Containers


 Unlike virtualization, modern containerization platforms such as Kubernetes and OpenShift do not ship embedded storage with containers. A standard VM, by contrast, already comes with virtual disks and their storage space, because the guest operating system requires them.

In Kubernetes, OpenShift and other variants, you (or your cluster administrator) need to preconfigure storage for your containers/pods by providing the cluster with persistent storage space. This space can be traditional block storage (iSCSI, FC), file storage (CIFS, NFS) or cloud storage (S3, Azure Files, EBS).
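
The PVC created later in this article references a storage class named nfs-storage. If your cluster does not provision NFS volumes dynamically, an administrator can expose an NFS share as a statically created PersistentVolume. The manifest below is only a minimal sketch: the server address 192.168.1.10 and the export path /exports/www are made-up values you would replace with your own NFS server details.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-www
spec:
  capacity:
    storage: 300Gi
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-storage
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.10      # hypothetical NFS server address
    path: /exports/www        # hypothetical exported directory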


Whatever technology you use, you need a Persistent Volume Claim (PVC) to provide persistent storage space to containers. 

As a first task, I recommend creating a resource quota for PVCs so that a single project cannot exhaust the storage space. Here I am going to tell OpenShift to limit the project (namespace) to a total of 500 GB of requested storage and a maximum of 3 PVCs.

[ozgurkkisa@workstation ~]$ oc create quota storage \
  --hard=requests.storage=500G,persistentvolumeclaims=3
In other words, an admin or a developer can create at most 3 PVCs requesting a combined total of 500 GB for the project's containers. For instance, they could split the space as pvc1 = 150 GB, pvc2 = 150 GB, pvc3 = 200 GB.
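
You can check the quota and how much of it has already been consumed at any time. Both commands below are standard; their exact output depends on your cluster and project:

[ozgurkkisa@workstation ~]$ oc describe quota storage
[ozgurkkisa@workstation ~]$ oc get resourcequota storage -o yaml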

First, create an empty file with a .yml extension. I named my file pvc.yml:

[ozgurkkisa@workstation ~]$ touch pvc.yml
Open the file you created with your favorite text editor, for example nano or vim, then write or paste the content below:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: apache-www
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-storage
  resources:
    requests:
      storage: 300Gi

In this example, I created a PVC named apache-www, requesting 300 GiB of storage with the ReadWriteMany access mode (required when several pods, possibly on different nodes, must mount the same volume; NFS supports this).
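
Note that the storageClassName has to match a storage class that actually exists in your cluster; nfs-storage is simply the name used in my environment, and yours may differ. You can list the available classes with:

[ozgurkkisa@workstation ~]$ oc get storageclass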

After you save your yml file, apply it by typing:

[ozgurkkisa@workstation ~]$ oc apply -f pvc.yml
persistentvolumeclaim/apache-www created

Tip: If you made a mistake or simply want to delete the PVC you created, use:

oc delete persistentvolumeclaim apache-www
persistentvolumeclaim "apache-www" deleted
After applying the change, you can check your new PVC by typing this command:

[ozgurkkisa@workstation ~]$ oc get persistentvolumeclaims
NAME         STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
apache-www   Bound    pvc-...   300Gi      RWX            nfs-storage    15s
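
If the PVC stays in the Pending state instead of Bound, the cluster could not find or provision a matching volume (for example a wrong storage class name, no remaining capacity, or an unsupported access mode). Describing the claim usually shows the reason in its events:

[ozgurkkisa@workstation ~]$ oc describe pvc apache-www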
Our PVC is now ready, so we can use it in any container in the project/namespace. To attach a PVC to a container/pod, you have to add the required information to the application's YAML file. For example, let's say we have an Apache application named httpd and we want to keep our web site/application files on persistent storage. We have to provide the additional information in the provisioning file:


apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpd
  template:
    metadata:
      labels:
        app: httpd
    spec:
      containers:
      - image: registry.access.redhat.com/rhscl/httpd-24-rhel7
        imagePullPolicy: Always
        name: httpd
        ports:
        - containerPort: 8080
          protocol: TCP
        - containerPort: 8443
          protocol: TCP
        volumeMounts:
        - mountPath: /var/www/html
          name: html
      restartPolicy: Always
      volumes:
      - name: html
        persistentVolumeClaim:
          claimName: apache-www
Pay attention to the volumeMounts and volumes sections. These are the volume-specific entries that bind persistent storage to our container/pod. In the sample above, we simply reference the persistent volume claim named apache-www and mount its space at the container's /var/www/html directory.
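
As a side note, if the httpd deployment already exists in the project, you can also attach the claim from the command line instead of editing YAML. The sketch below assumes the deployment and the apache-www claim shown above:

[ozgurkkisa@workstation ~]$ oc set volume deployment/httpd \
  --add --name=html \
  --type=persistentVolumeClaim --claim-name=apache-www \
  --mount-path=/var/www/html

In this article, though, I will stick with the YAML file approach.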

You can save the file as apache.yml. After the required information is added and the file is saved, create or redeploy your Apache instance by running:


[ozgurkkisa@workstation ~]$ oc apply -f apache.yml

After a while, your new/existing pods will be created or redeployed. When a pod crashes or is stopped, your data persists in the storage.
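To confirm the NFS volume is really mounted, you can run a command inside the pod. Here I use oc rsh against the deployment, but you can equally target a pod name taken from oc get pods:

[ozgurkkisa@workstation ~]$ oc rsh deployment/httpd df -h /var/www/html
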

Hope to see you in new articles and tips soon.

With my best wishes 😀





