Docker Compose: Spark worker UI ports are dynamically allocated when scaling
I have Apache Spark running inside containers created with Docker Compose. When I create a worker, I specify which port on the host machine (my laptop) is mapped to the worker's web UI port `8081`. With one container this works fine: I can bind `8081:8081`, and the worker container's web UI is reachable at `localhost:8081`.
The problem is that when I scale the number of workers with `docker-compose scale worker=3`, I cannot pin the host port in my `docker-compose.yml`, because the scaled containers would all try to bind the same port and conflict.
I tried dynamic port mapping, but then the host port forwarded to the worker port `8081` ends up being something like `32XXX`, while the Spark Master web UI still links to the worker web UIs on port `8081`, so none of the links work.
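For context, the host port that Docker assigned to each scaled replica can at least be looked up per container (the `--index` flag here is from the docker-compose v1 CLI, which matches the `docker-compose scale` syntax used above):

```
# Show which host port was mapped to container port 8081 for each
# replica of the scaled "worker" service; --index selects the Nth one.
docker-compose port --index=1 worker 8081   # e.g. 0.0.0.0:32768
docker-compose port --index=2 worker 8081   # e.g. 0.0.0.0:32769
docker-compose port --index=3 worker 8081   # e.g. 0.0.0.0:32770
```

This only helps you find the UIs manually; it does not fix the Master's links, which are built from `SPARK_PUBLIC_DNS` and `SPARK_WORKER_WEBUI_PORT` inside each container.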
Is there a way to scale my containers without port collisions?
The relevant part of my `docker-compose.yml`:
```yaml
worker:
  image: gettyimages/spark
  command: bin/spark-class org.apache.spark.deploy.worker.Worker spark://master:7077
  hostname: worker
  environment:
    SPARK_CONF_DIR: /conf
    SPARK_WORKER_CORES: 2
    SPARK_WORKER_MEMORY: 1g
    SPARK_WORKER_PORT: 8881
    SPARK_WORKER_WEBUI_PORT: 8081
    SPARK_PUBLIC_DNS: localhost
  links:
    - master
    - cassandra
    - kafka
  expose:
    - 7012
    - 7013
    - 7014
    - 7015
    - 7016
    - 8881
  ports:
    - 8081         # dynamic port binding; ends up being 32XXX:8081
    # - 8081:8081  # only works with 1 container
```
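One possible workaround, sketched under the assumption that a fixed worker count is acceptable: give up `docker-compose scale` and declare each worker as its own service, so every container gets a distinct `SPARK_WORKER_WEBUI_PORT` that matches its host binding and the Master's links resolve. The `worker1`/`worker2` service names below are illustrative, not from the original file:

```yaml
# Sketch: one service per worker; each worker's web UI port is unique
# and identical inside and outside the container, so links from the
# Master UI (built from SPARK_PUBLIC_DNS + SPARK_WORKER_WEBUI_PORT) work.
worker1:
  image: gettyimages/spark
  command: bin/spark-class org.apache.spark.deploy.worker.Worker spark://master:7077
  environment:
    SPARK_WORKER_WEBUI_PORT: 8081
    SPARK_PUBLIC_DNS: localhost
  ports:
    - 8081:8081
worker2:
  image: gettyimages/spark
  command: bin/spark-class org.apache.spark.deploy.worker.Worker spark://master:7077
  environment:
    SPARK_WORKER_WEBUI_PORT: 8082
    SPARK_PUBLIC_DNS: localhost
  ports:
    - 8082:8082
```

The cost is duplication (each worker would also need the `links`, `expose`, and remaining environment entries from the original service), but each worker's UI then lives at a stable, non-colliding `localhost` port.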
Is there something that could be done by overriding `SPARK_PUBLIC_DNS`, or with a dynamic variable in the compose file?