This guide covers how to add an NFS StorageClass and a dynamic provisioner to Kubernetes using the nfs-subdir-external-provisioner Helm chart. This lets workloads mount NFS shares dynamically through PersistentVolumeClaims (PVCs).
Example use cases:
- Database migrations
- Apache Kafka clusters
- Data processing pipelines
Requirements:
- An accessible NFS share exported with `rw,sync,no_subtree_check,no_root_squash`
- NFSv3 or NFSv4 protocol
- Kubernetes v1.31.7+ or RKE2 rke2r1 or later
Let's get to it.
1. NFS Server Export Setup
Ensure your NFS server exports the shared directory correctly:
```
# /etc/exports
/rke-pv-storage worker-node-ips(rw,sync,no_subtree_check,no_root_squash)
```

- Replace `worker-node-ips` with the actual IPs or CIDR blocks of your worker nodes.
- Run `sudo exportfs -r` to reload the export table.
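Before reloading, it can help to sanity-check that an export line carries all the options this guide relies on. A minimal sketch, assuming a sample line with a placeholder CIDR (adjust both to your environment):

```shell
# Sample export line to validate -- the path and CIDR here are placeholders.
line='/rke-pv-storage 192.168.162.0/24(rw,sync,no_subtree_check,no_root_squash)'

# Confirm each required export option is present before running exportfs -r.
for opt in rw sync no_subtree_check no_root_squash; do
  case "$line" in
    *"$opt"*) echo "found: $opt" ;;
    *)        echo "missing: $opt" >&2; exit 1 ;;
  esac
done
echo "export line looks sane"
```

From a worker node, `showmount -e <server-ip>` then confirms the export is actually visible over the network.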
2. Install NFS Subdir External Provisioner
Add the Helm repo and install the provisioner:
```shell
helm repo add nfs-subdir-external-provisioner \
  https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm repo update

helm install nfs-client-provisioner \
  nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --namespace kube-system \
  --set nfs.server=192.168.162.100 \
  --set nfs.path=/rke-pv-storage \
  --set storageClass.name=nfs-client \
  --set storageClass.defaultClass=false
```
Notes:
- To make this the default storage class, set `storageClass.defaultClass=true`.
- `nfs.server` should point to the IP of your NFS server.
- `nfs.path` must be a valid exported directory on that NFS server.
- `storageClass.name` can be referenced in your PersistentVolumeClaim YAMLs as `storageClassName: nfs-client`.
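If you prefer not to pass everything as `--set` flags, the same settings can live in a values file; a sketch mirroring the flags above:

```yaml
# values.yaml -- same settings as the --set flags above
nfs:
  server: 192.168.162.100
  path: /rke-pv-storage
storageClass:
  name: nfs-client
  defaultClass: false
```

Install with `helm install nfs-client-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner --namespace kube-system -f values.yaml`.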
3. PVC and Pod Test
Create a test PVC and pod using the following YAML:
```yaml
# test-nfs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-client
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: test-nfs-pod
spec:
  containers:
    - name: shell
      image: busybox
      command: [ "sh", "-c", "sleep 3600" ]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: test-nfs-pvc
```
Apply it:
```shell
kubectl apply -f test-nfs-pvc.yaml
kubectl get pvc test-nfs-pvc -w
```
Expected output:
```
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-nfs-pvc   Bound    pvc-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx   1Gi        RWX            nfs-client     30s
```
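Because the claim requests `ReadWriteMany`, a second pod can mount the same PVC concurrently; a sketch (the pod name is arbitrary):

```yaml
# test-nfs-pod-2.yaml -- a second pod sharing the same claim
apiVersion: v1
kind: Pod
metadata:
  name: test-nfs-pod-2
spec:
  containers:
    - name: shell
      image: busybox
      command: [ "sh", "-c", "sleep 3600" ]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: test-nfs-pvc
```

A file written under `/data` in one pod should be visible in the other, confirming the share really is mounted read-write by both.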
4. Troubleshooting
If the PVC remains in `Pending`, follow these steps:

Check the provisioner pod status:

```shell
kubectl get pods -n kube-system | grep nfs-client-provisioner
```

Inspect the provisioner pod:

```shell
kubectl describe pod -n kube-system <pod-name>
kubectl logs -n kube-system <pod-name>
```
Common Issues:

- Broken state: bad NFS mount

  ```
  mount.nfs: access denied by server while mounting 192.168.162.100:/pl-elt-kakfka
  ```

  This usually means the NFS path is misspelled or not exported properly.

- Broken state: root_squash enabled

  ```
  failed to provision volume with StorageClass "nfs-client": unable to create directory to provision new pv: mkdir /persistentvolumes/…: permission denied
  ```

  Fix by changing the export to use `no_root_squash`, or `chown` the directory to `nobody:nogroup`.

- ImagePullBackOff

  Ensure nodes have internet access and can reach `registry.k8s.io`.

- RBAC errors

  Make sure the ServiceAccount used by the provisioner has permissions to watch PVCs and create PVs.
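The Helm chart creates this RBAC for you, but when debugging permission errors it helps to know roughly what to look for. A sketch of the kind of ClusterRole rules a dynamic provisioner needs (not the chart's exact manifest; the name below is illustrative):

```yaml
# Sketch: typical ClusterRole rules for a dynamic provisioner
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
```

`kubectl describe clusterrole <name>` on the role actually bound to the provisioner's ServiceAccount will show whether any of these verbs are missing.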
5. Healthy State Example
```shell
kubectl get pods -n kube-system | grep nfs-client-provisioner-nfs-subdir-external-provisioner
```

```
nfs-client-provisioner-nfs-subdir-external-provisioner-7992kq7m   1/1   Running   0   3m39s
```

```shell
kubectl describe pod -n kube-system nfs-client-provisioner-nfs-subdir-external-provisioner-7992kq7m
# Output shows pod is Running with Ready=True

kubectl logs -n kube-system nfs-client-provisioner-nfs-subdir-external-provisioner-7992kq7m
```

```
...
I0512 21:46:03.752701 1 controller.go:1420] provision "default/test-nfs-pvc" class "nfs-client": volume "pvc-73481f45-3055-4b4b-80f4-e68ffe83802d" provisioned
I0512 21:46:03.752763 1 volume_store.go:212] Trying to save persistentvolume "pvc-73481f45-3055-4b4b-80f4-e68ffe83802d"
I0512 21:46:03.772301 1 volume_store.go:219] persistentvolume "pvc-73481f45-3055-4b4b-80f4-e68ffe83802d" saved
I0512 21:46:03.772353 1 event.go:278] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Name:"test-nfs-pvc"}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-73481f45-3055-4b4b-80f4-e68ffe83802d
...
```
Once `test-nfs-pvc` is bound and the pod starts successfully, your setup is working. You can now safely reference the `nfs-client` StorageClass in other workloads (e.g., a Strimzi KafkaNodePool).
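For example, a Strimzi KafkaNodePool can point its persistent-claim storage at this class; a sketch where the pool name, cluster label, replica count, and size are all placeholders for your deployment:

```yaml
# Sketch: a Strimzi KafkaNodePool using the nfs-client StorageClass
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: brokers
  labels:
    strimzi.io/cluster: my-cluster
spec:
  replicas: 3
  roles:
    - broker
  storage:
    type: persistent-claim
    size: 100Gi
    class: nfs-client
    deleteClaim: false
```

Note that Strimzi's storage spec uses `class` rather than the `storageClassName` field seen in raw PVCs; the provisioner behaves the same either way.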