In my last blog we configured static persistent volumes. This required that the storage (an NFS share) for the application be prepared in advance before the persistent volume ("pv") could be used by the persistent volume claim ("pvc").
With a dynamic persistent volume claim this is not necessary: when a pvc makes a claim against a storage class, space on a pv is claimed automatically. In the setup below I will explain how to configure this on our Kubernetes cluster, with our Synology NAS providing the NFS storage.
The following actions need to be executed on the k8s-master via SSH (PuTTY).
- Install Git
sudo apt install git
After git is installed, enter the command below to clone the nfs-subdir-external-provisioner repository:
git clone https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner.git
This will clone all the content of the nfs-subdir-external-provisioner repository to your home folder on the k8s-master.
You should now have an nfs-subdir-external-provisioner subfolder in your home directory. Copy the files we need out of it:
# Make a new directory in your home folder
cd $HOME
mkdir nfs-client-provisioner
# Copy the required files from the github source
cp nfs-subdir-external-provisioner/deploy/deployment-arm.yaml nfs-client-provisioner/deployment-arm.yaml
cp nfs-subdir-external-provisioner/deploy/class.yaml nfs-client-provisioner/class.yaml
cp nfs-subdir-external-provisioner/deploy/rbac.yaml nfs-client-provisioner/rbac.yaml
Now you need to adjust the files similar to the settings below. In the example below the kubedata NFS share has already been configured on the NAS or NFS server. See my previous blog on how to configure this on your Synology NAS.
nano nfs-client-provisioner/deployment-arm.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs-client-provisioner # < change namespace to nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs-dynamic-storage # < new provisioner name
            - name: NFS_SERVER
              value: xxx.xxx.xxx.xxx # < IP address of your NAS server
            - name: NFS_PATH
              value: /volume1/kubedata # < example value of your nfs share
      volumes:
        - name: nfs-client-root
          nfs:
            server: xxx.xxx.xxx.xxx # < IP address of your NAS server
            path: /volume1/kubedata # < example value of your nfs share
nano nfs-client-provisioner/rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs-client-provisioner # < change namespace to nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: nfs-client-provisioner # < change namespace to nfs-client-provisioner
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs-client-provisioner # < change namespace to nfs-client-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs-client-provisioner # < change namespace to nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: nfs-client-provisioner # < change namespace to nfs-client-provisioner
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
nano nfs-client-provisioner/class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: nfs-dynamic-storage # < new provisioner name, must match the deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"
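A side note on archiveOnDelete: with "false", the subfolder on the NAS is removed when the pvc is deleted. If you would rather keep the data, the provisioner can archive it instead (the folder is renamed with an archived- prefix). A sketch of such an alternative storage class; the name managed-nfs-storage-archive is my own example, not part of this setup:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage-archive # example name, not used elsewhere in this blog
provisioner: nfs-dynamic-storage # must still match the deployment's PROVISIONER_NAME
parameters:
  archiveOnDelete: "true" # keep deleted pvc data as an archived- folder on the share
```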
Now create the namespace and apply the three manifests:
kubectl create namespace nfs-client-provisioner
kubectl apply -f nfs-client-provisioner/class.yaml
kubectl apply -f nfs-client-provisioner/rbac.yaml
kubectl apply -f nfs-client-provisioner/deployment-arm.yaml
kubectl get all -n nfs-client-provisioner
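Besides the overview above, a couple of quick sanity checks are useful before continuing. This is a sketch assuming the manifests above were applied as-is; the resource names come from the files we just edited:

```shell
# The storage class should be listed with provisioner nfs-dynamic-storage
kubectl get storageclass

# The provisioner pod should reach STATUS Running
kubectl get pods -n nfs-client-provisioner

# If the pod keeps restarting, the logs usually show an NFS mount error
kubectl logs -n nfs-client-provisioner deployment/nfs-client-provisioner
```

A pod stuck in ContainerCreating almost always means the NFS share cannot be mounted: check the NAS IP, the export path and the NFS permissions on the Synology.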
nano pvc-nfs-kubedata-nginx-1-dynamic.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs-kubedata-nginx-1
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage" # < the dynamic nfs storage class we have created earlier
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
kubectl apply -f pvc-nfs-kubedata-nginx-1-dynamic.yaml
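The volume.beta.kubernetes.io/storage-class annotation used above still works, but it has been deprecated for a long time. On current Kubernetes versions the same claim can be written with the spec.storageClassName field instead, which is the recommended form:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs-kubedata-nginx-1
spec:
  storageClassName: managed-nfs-storage # replaces the beta annotation
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
```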
Next, create a simple index.html test page that we will serve from the dynamically provisioned volume:
<!DOCTYPE html>
<html>
<head>
<style>
</style>
</head>
<body>
<h1>Kubernetes - Webtest 1</h1>
<p>This page is located on a dynamic persistent volume, and run on a k8s-cluster!</p>
</body>
</html>
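The provisioner creates a subfolder on the kubedata share named after the namespace, pvc and pv (something like default-pvc-nfs-kubedata-nginx-1-pvc-<uuid>), and the test page has to end up in that folder. One way to do this, assuming the share is temporarily mounted on the k8s-master; the mount point /mnt/kubedata is my own example:

```shell
# Mount the kubedata share from the NAS (example mount point, your NAS IP)
sudo mkdir -p /mnt/kubedata
sudo mount -t nfs xxx.xxx.xxx.xxx:/volume1/kubedata /mnt/kubedata

# Copy the test page into the subfolder the provisioner created for the pvc
sudo cp index.html /mnt/kubedata/default-pvc-nfs-kubedata-nginx-1-*/

# Unmount again when done
sudo umount /mnt/kubedata
```

You can of course also copy the file through File Station on the Synology; the subfolder only exists after the pvc from the previous step has been applied.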
nano deploy-nginx-1-k8s.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-1
  labels:
    app: nginx-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-1
  template:
    metadata:
      labels:
        app: nginx-1
    spec:
      volumes:
        - name: nginx-1-volume
          persistentVolumeClaim:
            claimName: pvc-nfs-kubedata-nginx-1
      containers:
        - image: nginx
          name: nginx-1
          imagePullPolicy: Always
          resources:
            limits:
              memory: 512Mi
              cpu: "1"
            requests:
              memory: 256Mi
              cpu: "0.2"
          volumeMounts:
            - name: nginx-1-volume
              mountPath: /usr/share/nginx/html
---
kind: Service
apiVersion: v1
metadata:
  name: nginx-1-service
spec:
  selector:
    app: nginx-1
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 80
  type: LoadBalancer
---
kubectl apply -f deploy-nginx-1-k8s.yaml
kubectl get service -n default
erikdebont@k8s-master:~$ kubectl get service -n default
NAME              TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)        AGE
kubernetes        ClusterIP      10.96.0.1       <none>            443/TCP        46d
nginx-1-service   LoadBalancer   10.103.42.209   xxx.xxx.xxx.xxx   80:31520/TCP   18m
erikdebont@k8s-master:~$
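With the service exposing an external IP, the test page can be fetched from any machine on the network. Substitute the EXTERNAL-IP that kubectl reported for your nginx-1-service:

```shell
# Fetch the test page served by nginx from the dynamic persistent volume
curl http://xxx.xxx.xxx.xxx/
```

If everything works, curl returns the "Kubernetes - Webtest 1" HTML page we placed on the share earlier, served straight from the dynamically provisioned volume.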