Where to store data in a Kubernetes cluster

How do pods managed by a replication controller and "hidden" behind a service in Kubernetes write and read data? If I have an application that receives images from users that need to be saved, where do I store them? Because of the service in front, I cannot control which node the data ends up on if I use volumes.

+3




3 answers


I think the "simple" answer to your question is that you will need shared storage in your Kubernetes cluster, so that every pod can access the same data. Then it doesn't matter where the containers are running or which pod is actually serving the request.
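As an illustration, here is a minimal sketch of that idea using an NFS-backed volume (the server address, export path, image name, and mount path below are all hypothetical). Every pod that mounts the same export sees the same files, no matter which node it lands on:

apiVersion: v1
kind: Pod
metadata:
  name: image-uploader
spec:
  containers:
  - name: app
    image: my-app-image            # hypothetical application image
    volumeMounts:
    - name: shared-images
      mountPath: /data/images      # the app writes uploaded images here
  volumes:
  - name: shared-images
    nfs:
      server: 10.0.0.5             # hypothetical NFS server reachable from all nodes
      path: /exports/images        # hypothetical export shared by every pod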

Another possible solution would be Flocker; they describe themselves briefly as:

Flocker is a data volume manager and multi-host Docker cluster management tool. With it you can control your data using the same tools you use for your stateless applications by harnessing the power of ZFS on Linux.



Anyway, I think the question of storage on Kubernetes, or any other Docker-based infrastructure, is a very interesting one.

It looks like Google App Engine doesn't support sharing its datastore between apps by default, as pointed out in this SO question.

+4




If you are running on Google Compute Engine, you can use a Compute Engine persistent disk (network-attached storage) that is attached to the pod and moves with the pod:

https://kubernetes.io/docs/concepts/storage/volumes/#gcepersistentdisk
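As a rough sketch of what that page describes (the disk, pod, and image names here are placeholders), the pod simply references the pre-created persistent disk by name:

apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - name: test-container
    image: mongo                   # placeholder image
    volumeMounts:
    - name: test-volume
      mountPath: /data/db
  volumes:
  - name: test-volume
    gcePersistentDisk:
      pdName: my-data-disk         # must already exist in the same GCE zone as the node
      fsType: ext4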



We would like to support other types of network-attached storage (iSCSI, NFS, etc.), but we haven't had the chance to build them yet. Contributions are welcome! ;)

+3




GKE allows you to create disks for data storage, and this storage can be attached to multiple pods. Note that your cluster and the disk must be in the same zone/region.

gcloud compute disks create disk1 --zone=zone_name

      

Now you can use this disk to store data from your pods. Here is a simple MongoDB replication controller YAML file that uses disk1. It may not be the most efficient way, but it is the simplest one I know.

apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    name: mongo
  name: mongo-controller
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: mongo
    spec:
      containers:
      - image: mongo
        name: mongo
        ports:
        - name: mongo
          containerPort: 27017
          hostPort: 27017
        volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db
      volumes:
      - name: mongo-persistent-storage
        gcePersistentDisk:
          pdName: disk1
          fsType: ext4

disk1 will still exist even if your pod is removed or replaced.
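If it helps, a quick way to try this out (assuming the YAML above is saved as mongo-controller.yaml):

kubectl create -f mongo-controller.yaml   # create the replication controller and its pod
kubectl get pods                          # the mongo pod should be scheduled and running
kubectl delete rc mongo-controller        # removing the controller leaves disk1 and its data intact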

0








