Configure NFS Server for PersistentVolume via DNS Name or Static Cluster IP

I have a Kubernetes cluster running in Google Container Engine that defines a Pod running an NFS server, which I want to access from other Pods via various PersistentVolumes.

What is the best way to set up an NFS service if it's on the same cluster?

From various documentation I gathered that it is not possible to rely on kube-dns for this, since the node that starts the Kubernetes Pod is not configured to use it as its DNS resolver.

So something like this does not work (and indeed it doesn't - I've tested it, with different names / FQDNs) ...

apiVersion: v1
kind: PersistentVolume
metadata:
  name: xxx-persistent-storage
  labels:
    app: xxx
spec:
  capacity:
    storage: 10Gi
  nfs:
    path: "/exports/xxx"
    server: nfs-service.default.svc.cluster.local  # <-- does not work

I can start the NFS server, look up its ClusterIP via kubectl describe svc nfs-service, and then hard-code that endpoint IP into the PV (this works):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: xxx-persistent-storage
  labels:
    app: xxx
spec:
  capacity:
    storage: 10Gi
  nfs:
    path: "/exports/xxx"
    server: 10.2.1.7  # <-- does work


But this seems wrong: as soon as I have to recreate the NFS Service, it gets a new IP and I have to reconfigure all PVs that refer to it.

  • What's the best practice here? I'm surprised that I didn't find any example for it, because I assumed it was quite a normal thing, right?

  • Is it possible to assign some kind of static IP address to the Service, so that I can rely on always getting the same IP for the NFS service?


1 answer


You are on the right track. To make sure your Service is using a static IP address, simply add clusterIP: 1.2.3.3 to the spec: section of the Service.

From the canonical example:


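In that spirit, a minimal sketch of a Service with a fixed ClusterIP (the service name nfs-service, the role: nfs-server selector, and the address below are assumptions; the clusterIP must be unused and lie inside the cluster's service CIDR):

apiVersion: v1
kind: Service
metadata:
  name: nfs-service           # assumed name, matching the PV examples above
spec:
  clusterIP: 10.3.240.20      # assumed static IP; must be free and within the service IP range
  selector:
    role: nfs-server          # assumed label on the NFS server Pod
  ports:
    - name: nfs
      port: 2049
    - name: mountd
      port: 20048
    - name: rpcbind
      port: 111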

In the future, we may be able to link them together using service names, but for now you need to hard-code the IP.
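With that in place, the PV from the question can keep pointing at the same address even if the Service is deleted and recreated (the IP below is the assumed static ClusterIP from the sketch above):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: xxx-persistent-storage
  labels:
    app: xxx
spec:
  capacity:
    storage: 10Gi
  nfs:
    path: "/exports/xxx"
    server: 10.3.240.20  # the fixed clusterIP chosen in the Service above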
