Running socket.io on Google Container Engine with multiple containers is not working
I am trying to run a socket.io application on Google Container Engine. I have set up an Ingress-backed service that creates a Google load balancer pointing to the cluster. With a single pod in the cluster everything works well. As soon as I add more pods, I get a flood of socket.io errors. It looks like the connections end up in different containers in the cluster, and I suspect this breaks socket.io's long-polling requests and transport upgrades.
I have configured the load balancer to use IP-based sticky sessions.
Does this mean that it will have affinity for a specific NODE in the Kubernetes cluster rather than for a specific POD?
How can I configure it so that a session sticks to a specific POD in the cluster?
NOTE: I am manually setting sessionAffinity on the cloud load balancer as well.
Here is my yaml input.
Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: my-static-ip
spec:
  backend:
    serviceName: my-service
    servicePort: 80
Service
apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app: myApp
spec:
  sessionAffinity: ClientIP
  type: NodePort
  ports:
  - port: 80
    targetPort: http-port
  selector:
    app: myApp
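For context on the config above: `sessionAffinity: ClientIP` on a Service is enforced by kube-proxy on each node, while the GCE HTTP(S) load balancer created by the Ingress distributes traffic across nodes independently, so affinity would also need to exist at the load-balancer layer. One possible sketch, assuming a GKE version that supports the `BackendConfig` CRD (the name `my-backendconfig` is an illustrative assumption, not part of the original setup):

```yaml
# Hypothetical sketch: configure client-IP affinity on the GCE load
# balancer itself via a BackendConfig attached to the Service.
# Assumes GKE with the cloud.google.com/v1 BackendConfig CRD available.
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig        # illustrative name
spec:
  sessionAffinity:
    affinityType: "CLIENT_IP"   # or "GENERATED_COOKIE"
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app: myApp
  annotations:
    # Links the Service's port 80 to the BackendConfig above.
    cloud.google.com/backend-config: '{"ports": {"80": "my-backendconfig"}}'
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: http-port
  selector:
    app: myApp
```

This is only a sketch of how load-balancer-level affinity might be expressed; whether it resolves the socket.io polling errors would need to be verified against the cluster's GKE version.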