Kubernetes: how to change the accessModes of autoscaled pods to ReadOnlyMany?

I'm trying to use HPA: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/

PV:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: api-orientdb-pv
  labels:
    app: api-orientdb
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: api-orientdb-{{ .Values.cluster.name | default "testing" }}
    fsType: ext4

      

PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: api-orientdb-pv-claim
  labels:
    app: api
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      app: api-orientdb
  storageClassName: ""

      

HPA:

Name:                           api-orientdb-deployment
Namespace:                      default
Labels:                         <none>
Annotations:                        <none>
CreationTimestamp:                  Thu, 08 Jun 2017 10:37:06 +0700
Reference:                      Deployment/api-orientdb-deployment
Metrics:                        ( current / target )
  resource cpu on pods  (as a percentage of request):   17% (8m) / 10%
Min replicas:                       1
Max replicas:                       2
Events:                         <none>

      

and a new pod was created:

NAME                                       READY     STATUS    RESTARTS   AGE
api-orientdb-deployment-2506639415-n8nbt   1/1       Running   0          7h
api-orientdb-deployment-2506639415-x8nvm   1/1       Running   0          6h

      

As you can see, I am using gcePersistentDisk, which does not support the ReadWriteMany access mode.

The newly created pod also mounts the volume in rw mode:

Name:        api-orientdb-deployment-2506639415-x8nvm
Containers:
    Mounts:
      /orientdb/databases from api-orientdb-persistent-storage (rw)
Volumes:
  api-orientdb-persistent-storage:
    Type:   PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  api-orientdb-pv-claim
    ReadOnly:   false

      

Question: How does it work in this case? Is there a way to configure only the first pod (n8nbt) to use the PV with access mode ReadWriteOnce, while all the other scaled pods (x8nvm) use ReadOnlyMany? How can this be done automatically?

The only way I can think of is to create another PVC mounting the same disk but with a different accessModes, but then the question is: how do I configure the newly scaled pods to use that PVC?
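
For illustration, a minimal sketch of what such a read-only claim might look like (the -ro names are hypothetical); it assumes a second PV object is created for the same disk and attached with readOnly: true, since a PV can only be bound to a single PVC:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: api-orientdb-pv-ro            # hypothetical name
  labels:
    app: api-orientdb-ro
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadOnlyMany
  gcePersistentDisk:
    pdName: api-orientdb-testing      # the same underlying disk
    fsType: ext4
    readOnly: true                    # attach the disk read-only
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: api-orientdb-pv-claim-ro      # hypothetical name
spec:
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      app: api-orientdb-ro
  storageClassName: ""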


Update: Fri Jun 9 11:29:34 ICT 2017

I found something: there is nothing to guarantee that the newly scaled pod will run on the same node as the first pod. So if the volume plugin doesn't support ReadWriteMany and the scaled pod is scheduled on another node, it won't be able to mount the volume:

Failed to mount volume "api-orientdb-pv" on node "gke-testing-default-pool-7711f782-4p6f" with: googleapi: Error 400: The disk resource 'projects/xx/zones/us-central1-a/disks/api-orientdb-testing' is already being used by 'projects/xx/zones/us-central1-a/instances/gke-testing-default-pool-7711f782-h7xv'

https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes

Important! A volume can only be mounted using one access mode at a time, even if it supports many. For example, a GCEPersistentDisk can be mounted as ReadWriteOnce by a single node or ReadOnlyMany by many nodes, but not at the same time.

If so, is the only way to make HPA work to use a volume plugin that supports the ReadWriteMany access mode?
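
For comparison, this is roughly what a ReadWriteMany-capable volume would look like with the NFS plugin, which does support that mode; the names, server address and export path here are hypothetical:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: api-orientdb-pv-nfs           # hypothetical name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany                   # NFS supports this mode
  nfs:
    server: nfs.example.internal      # hypothetical NFS server
    path: /exports/orientdb           # hypothetical export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: api-orientdb-pv-claim-nfs     # hypothetical name
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: ""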


Update: Fri Jun 9 14:28:30 ICT 2017

If you want only one Pod to write, then create two Deployments. One with replicas: 1 and another that is attached to the autoscaler (and has readOnly: true in it)

OK.

Note that GCE PD can only be mounted by a single node if any of the Pods access it readWrite.

Then I have to use label selectors to ensure that all pods end up on the same node, right?

Your question is not clear to me

Let me explain: in the case of autoscaling, assume that with label selectors I can ensure the newly scaled pod ends up on the same node. But since the volume is mounted as rw, does that break the GCE PD, since we now have 2 pods mounting the volume as rw?

First of all, if you have Deployment with replicas: 1 you won't have 2 Pods running at the same time (most of the time!)

I know.

On the other hand, if the PVC specifies ReadWriteOnce, then after the first Pod is scheduled, any other Pods will have to be scheduled on the same node, or not scheduled at all (most common case: not enough resources on Node)

This is not the case with HPA. Please see the above updates for more details.

If for any reason you do have 2 Pods accessing the same file readWrite, then it is entirely up to the application what happens; that is not Kubernetes-specific.

The main thing that confused me:

ReadWriteOnce - the volume can be mounted as read-write by a single node

OK, node, not pod. But in the case of autoscaling, if two pods are running on the same node and both mount the volume as rw, does GCE PD support that? If so, how does it work?

+3




1 answer


It works as intended. The "Once" in ReadWriteOnce refers to the number of nodes that can use the PVC, not the number of pods (HPA or not).



If you want only one Pod to write, then create two Deployments: one with replicas: 1, and the other with an autoscaler attached (and with readOnly: true in it). Note that GCE PD can only be mounted by a single node if any of the Pods access it readWrite.
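
A rough sketch of that layout, using hypothetical names (api-orientdb-writer, api-orientdb-reader) and a hypothetical image, and the current apps/v1 API (a 2017 cluster would use extensions/v1beta1). The writer keeps replicas: 1 and mounts the claim read-write, while the reader Deployment mounts the same claim with readOnly: true and is the only target of the HPA:

# Writer: a single replica that mounts the volume read-write
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-orientdb-writer           # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api-orientdb
      role: writer
  template:
    metadata:
      labels:
        app: api-orientdb
        role: writer
    spec:
      containers:
        - name: api-orientdb
          image: orientdb:2.2         # hypothetical image
          volumeMounts:
            - name: api-orientdb-persistent-storage
              mountPath: /orientdb/databases
      volumes:
        - name: api-orientdb-persistent-storage
          persistentVolumeClaim:
            claimName: api-orientdb-pv-claim
---
# Readers: mount the same claim read-only; this is the Deployment the HPA scales
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-orientdb-reader           # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api-orientdb
      role: reader
  template:
    metadata:
      labels:
        app: api-orientdb
        role: reader
    spec:
      containers:
        - name: api-orientdb
          image: orientdb:2.2
          volumeMounts:
            - name: api-orientdb-persistent-storage
              mountPath: /orientdb/databases
              readOnly: true
      volumes:
        - name: api-orientdb-persistent-storage
          persistentVolumeClaim:
            claimName: api-orientdb-pv-claim
            readOnly: true
---
# HPA attached to the reader Deployment only
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: api-orientdb-reader-hpa       # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api-orientdb-reader
  minReplicas: 1
  maxReplicas: 2
  targetCPUUtilizationPercentage: 10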

+1








