Connect docker-compose to external database

I have a set of 4 containers that need to talk to each other and two of them need to connect to an external database.

I started working with Docker Compose and put everything together.

The containers can talk to each other without too much trouble, however, they cannot connect to an external database.

The external DB is running and I can easily connect to it through the shell.

The docker-compose file looks like this:

version: "3"

services:  

  bridge:
    # version => 2.1.4
    build: ./lora-gateway-bridge
    ports:
      - "1680/udp:1700/udp"
    links:
      - emqtt
      - redis
    environment:
      - MQTT_SERVER=tcp://emqtt:1883
    networks:
      - external
    restart: unless-stopped

  loraserver:
    # version => 0.16.1
    build: ./loraserver
    links:
      - redis
      - emqtt
      - lora-app-server
    environment:
      - NET_ID=010203
      - REDIS_URL=redis://redis:6379
      - DB_AUTOMIGRATE=true
      - POSTGRES_DSN=${SQL_STRING} ###<- connection string
      - BAND=EU_863_870
    ports:
      - "8000:8000"
    restart: unless-stopped

  lora-app-server:
    build: ./lora-app-server 
    # version => 0.8.0
    links:
      - emqtt
      - redis
    volumes:
      - "/opt/lora-app-server/certs:/opt/lora-app-server/certs"
    environment:
      - POSTGRES_DSN=${SQL_STRING} ### <- connection string
      - REDIS_URL=redis://redis:6379
      - NS_SERVER=loraserver:8000
      - MQTT_SERVER=tcp://emqtt:1883
    ports:
      - "8001:8001"
      - "443:8080"
    restart: unless-stopped

  redis:
    image: redis:3.0.7-alpine
    restart: unless-stopped

  emqtt:
    image: erlio/docker-vernemq:latest
    volumes:
      - ./emqttd/usernames/vmq.passwd:/etc/vernemq/vmq.passwd
    ports:
      - "1883:1883"
      - "18083:18083"
    restart: unless-stopped

      

It looks like they can't find the host the database is running on.

All the examples I can find talk about a database running inside the Compose file itself, but I am not clear on how to connect a container to an external service.
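For reference, SQL_STRING is passed in from the environment (e.g. via an .env file next to the docker-compose.yml) and is a standard PostgreSQL DSN, something like this (all values below are placeholders):

# .env -- placeholders, adjust to your setup
SQL_STRING=postgres://loraserver:secret@10.100.100.123:5432/loraserver?sslmode=disable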



1 answer


From your code, I can see that you need to connect to an external PostgreSQL server.

Networks

The ability to discover a resource on a network depends on the kind of network being used.

Docker provides a set of predefined network types that are easy to configure, and you can also create your own networks and attach containers to them.

You have several types to choose from; the first has the strongest isolation (the corresponding docker run flags are shown after the list):

  • closed containers = only the loopback interface inside the container, with no access to the container virtual network and none to the host network
  • bridged containers = your containers are connected through the default bridge network, which in turn is connected to the host's network
  • joined containers = the containers share the same network stack, so there is no isolation between them at this level; they also have connectivity to the host network
  • open containers = full access to the host network
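
In terms of plain docker run flags, these roughly correspond to the following (image and container names are placeholders):

docker run --net none myImage                        # closed: loopback only
docker run myImage                                   # bridged: the default
docker run --net container:otherContainer myImage    # joined: share another container's network stack
docker run --net host myImage                        # open: use the host network directly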

The default type is bridged, so all your containers will use the same default bridge network.

In docker-compose.yml you can select the network type with network_mode. Since you haven't defined any networks or changed network_mode, you are using the default: bridge.

This means that your containers join the default bridge network, where every container can reach the others as well as the host network.
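
If you ever need to change this, network_mode can be set per service in docker-compose.yml; a minimal sketch (not needed for your setup, and note that "host" mode cannot be combined with links or port mappings):

services:
  loraserver:
    build: ./loraserver
    # use the host's network stack instead of the default bridge
    network_mode: "host"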

Therefore your problem does not appear to be network related. Instead, you should check whether PostgreSQL accepts remote connections. By default, PostgreSQL only accepts connections from localhost, so you need to configure it explicitly to allow remote access.

You can set up your PostgreSQL instance by following this answer or this blog post.
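As a minimal sketch (the exact file locations depend on your PostgreSQL version and distribution, and the allowed subnet below is an assumption based on Docker's default bridge), the relevant settings are:

# postgresql.conf -- e.g. /etc/postgresql/9.6/main/postgresql.conf
listen_addresses = '*'            # listen on all interfaces, not only localhost

# pg_hba.conf -- in the same directory
# allow password-authenticated connections from the default Docker bridge subnet
host    all    all    172.17.0.0/16    md5

# restart PostgreSQL afterwards, for example:
sudo systemctl restart postgresql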



Check networks

Below are some commands that may be useful in your scenario:

  • list available networks: docker network ls

  • check which containers are attached to the bridge network: docker network inspect --format "{{ json .Containers }}" bridge

  • check the networks of a specific container: docker inspect --format "{{ json .NetworkSettings.Networks }}" myContainer
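
For example, combining the last two with docker-compose, you could check the networks of the loraserver container (assuming the stack is running):

docker inspect --format "{{ json .NetworkSettings.Networks }}" $(docker-compose ps -q loraserver)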

Testing the connection

To test the connection, you can create a container with psql installed and try to connect to the remote PostgreSQL server from it, which gives you a minimal environment to reproduce your case.

The Dockerfile can be:

FROM ubuntu
RUN apt-get update
RUN apt-get install -y postgresql-client
# password used by the psql client; replace with your own
ENV PGPASSWORD myPassword
# replace 10.100.100.123 with the address of your PostgreSQL server
CMD psql --host=10.100.100.123 --port=5432 --username=postgres -c "SELECT 'SUCCESS !!!';"

      

Then you can create an image with: docker build -t test-connection .

Finally, you can start the container with: docker run --rm test-connection:latest

If your connection succeeds, then SUCCESS !!! will be printed.
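
The output will look roughly like this:

  ?column?
-------------
 SUCCESS !!!
(1 row)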

Note: connecting with localhost, as in CMD psql --host=localhost --port=5432 --username=postgres -c "SELECT 'SUCCESS !!!';", will not work, because localhost inside the container is the container itself, not the host machine. The address must therefore be one that is reachable from inside the container.

Note: if you start the container as a closed container with docker run --rm --net none test-connection:latest, there will be no network interface other than loopback, and the connection will fail. This just illustrates how the choice of network affects the outcome.
