How do I start a Redis server AND another application inside Docker?

I created a Django app that runs inside a Docker container. I needed to run a background task (something like a thread) from the Django app, so I used Celery with Redis as its backend. I install Redis in the Docker image (Ubuntu 14.04) like this:

RUN apt-get update && apt-get -y install redis-server
RUN pip install redis


The Redis server won't start: the Django app throws an exception because the connection on port 6379 is refused. If I start Redis manually, it works.

If I start the Redis server from the Dockerfile with the following line, the build hangs:

RUN redis-server


If I try to tweak the previous line, that doesn't work either:

RUN nohup redis-server &


So my question is: is there a way to start Redis in the background, and have it restart when the Docker container is restarted?

The final CMD in the Dockerfile is already taken by the app:

CMD uwsgi --http 0.0.0.0:8000 --module mymodule.wsgi




3 answers


RUN commands only add new image layers. They are not executed at runtime, only at image build time.

Use CMD instead. You can combine multiple commands by putting them into a shell script that is invoked by CMD:

CMD start.sh




In your start.sh script, write the following:

#!/bin/bash
nohup redis-server &
uwsgi --http 0.0.0.0:8000 --module mymodule.wsgi


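For completeness, a minimal sketch of how the script might be wired into the Dockerfile (the file name and path here are assumptions, not taken from the answer):

# copy the script into the image and make it executable
COPY start.sh /usr/local/bin/start.sh
RUN chmod +x /usr/local/bin/start.sh

# run it as the container's single top-level command
CMD ["/usr/local/bin/start.sh"]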


Use supervisord to control both processes. The conf file might look like this:



...
[program:redis]
command= /usr/bin/redis-server /srv/redis/redis.conf
stdout_logfile=/var/log/supervisor/redis-server.log
stderr_logfile=/var/log/supervisor/redis-server_err.log
autorestart=true

[program:nginx]
command=/usr/sbin/nginx
stdout_events_enabled=true
stderr_events_enabled=true


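To use this inside a container, supervisord itself has to run in the foreground as the container's main command. A minimal Dockerfile sketch, assuming the Ubuntu supervisor package and its default include directory (paths may differ on other distributions):

RUN apt-get update && apt-get -y install supervisor

# the config above, copied into supervisord's include directory
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf

# -n keeps supervisord in the foreground so the container stays alive
CMD ["/usr/bin/supervisord", "-n"]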


When you start a Docker container, there is always exactly one top-level process. When you boot your laptop, that top-level process is an init script, systemd or the like. A Docker image has an ENTRYPOINT directive; that is the top-level process that runs in your Docker container, and whatever else you want to run has to be a child of it. To run Django, a Celery worker and Redis inside the same Docker container, you need one process that starts all three of them as child processes. As Milan explained, you can set up a supervisord configuration for this and run supervisord as the parent process.
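As a sketch, a supervisord configuration for this question's stack might look roughly like the following (program names, paths and the Celery app module are assumptions based on the question, not a tested setup):

[program:redis]
command=/usr/bin/redis-server
autorestart=true

[program:celery]
command=celery worker -A mymodule
autorestart=true

[program:uwsgi]
command=uwsgi --http 0.0.0.0:8000 --module mymodule.wsgi
autorestart=true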

Another option is to actually boot an init system inside the container. This gets you closer to what you want, since it basically behaves as if you had a full-blown virtual machine. However, you lose many of the benefits of containerization by doing this :)

The easiest way is to run multiple containers using Docker Compose: one container for Django, one for your Celery worker, and one for Redis (and one for your datastore?). It's pretty easy to set up this way. For example...

# docker-compose.yml
web:
    image: myapp
    command: uwsgi --http 0.0.0.0:8000 --module mymodule.wsgi
    links:
      - redis
      - mysql
celeryd:
    image: myapp
    command: celery worker -A myapp.celery
    links:
      - redis
      - mysql
redis:
    image: redis
mysql:
    image: mysql


This will give you four containers for your four top-level processes. redis and mysql will be reachable under the DNS names "redis" and "mysql" inside your application containers, so instead of pointing at "localhost" you point at "redis".
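For example, the Celery broker URL would reference the service name "redis" instead of localhost. One way to pass it in (the variable name is an assumption; it depends on how the app reads its settings):

# hypothetical addition under the web and celeryd services above
environment:
    - CELERY_BROKER_URL=redis://redis:6379/0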

There is a lot of good information in the Docker Compose docs.
