Connect Rails / Unicorn / Nginx container to MySQL container
Related to this thread, I am trying to create two containers: one with a Rails app and another with a MySQL database, but I keep getting
Mysql2::Error (Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock')
in my app's production.log file after I hit the container IP, http://192.168.59.103.
When I start the Rails container I link the two, and I do get an error if I give the wrong MySQL container name, so the link flag itself seems to be accepted. What am I missing to bind the containers together so that the complete application runs in containers?
Rails container command
docker run --name games-app --link test-mysql:mysql -p 8080 -d -e SECRET_KEY_BASE=test sample_rails_games_app
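For reference, the DB container has to exist under the test-mysql name before the --link flag can resolve it. A sketch of the startup order, assuming the official mysql image and a root password matching the database.yml below:

```shell
# Start the DB container first, under the name the --link flag expects.
# (The image name and MYSQL_ROOT_PASSWORD value are assumptions.)
docker run --name test-mysql -e MYSQL_ROOT_PASSWORD=root -d mysql

# Then start the app container, linked to it under the alias "mysql".
docker run --name games-app --link test-mysql:mysql -p 8080 -d \
  -e SECRET_KEY_BASE=test sample_rails_games_app
```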
Here are my files:
Dockerfile
# Publish port 8080
EXPOSE 8080
CMD ["bundle", "exec","unicorn", "-p", "8080"]
CMD ["bunde", "exec", "rake", "db:migrate"]
Rails database.yml (dev and test are the same as production):
default: &default
  adapter: mysql2
  encoding: utf8
  pool: 5
  username: root
  password: root
  host: localhost
  #socket: /tmp/mysql.sock

production:
  <<: *default
  database: weblog_production
7/31/15 Edit
The docker log shows the unicorn server:
docker logs a13bf7851c6d
I, [2015-07-31T18:10:59.860203 #1] INFO -- : listening on addr=0.0.0.0:8080 fd=9
I, [2015-07-31T18:10:59.860583 #1] INFO -- : worker=0 spawning...
I, [2015-07-31T18:10:59.864143 #1] INFO -- : master process ready
I, [2015-07-31T18:10:59.864859 #7] INFO -- : worker=0 spawned pid=7
I, [2015-07-31T18:10:59.865097 #7] INFO -- : Refreshing Gem list
I, [2015-07-31T18:11:01.796690 #7] INFO -- : worker=0 ready
7/31/15 Solution (thanks to @Rico)
- db:migrate had problems running at container startup, so I ended up running it manually with docker run. Make sure you do this after the app container has been created (or as part of creating it), since the migration needs the link to the DB container.
- The linked article helped me understand that my link had never been created, which is why the containers could not communicate.
- Once I figured out how to create the link correctly, I updated my database.yml with the host and port values.
- Use this command to check the names of your env variables:
docker run --rm --name <unique-value> --link <db-name> <non-db-image> env
- Use this command to see the links defined on your app container:
docker inspect -f "{{ .HostConfig.Links }}" <app-name>
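The env variable names follow a predictable pattern derived from the link alias, so you can know what to look for before running env. A small Ruby sketch; the alias "mysql" and port 3306 are assumptions taken from the question's --link test-mysql:mysql flag and MySQL's default port:

```ruby
# Sketch: the env variable names Docker's legacy --link feature creates
# in the app (source) container for a given alias and exposed port.
def link_env_names(alias_name, port)
  prefix = alias_name.upcase
  [
    "#{prefix}_PORT",                   # e.g. tcp://172.17.0.5:3306
    "#{prefix}_PORT_#{port}_TCP_ADDR",  # the DB container's IP address
    "#{prefix}_PORT_#{port}_TCP_PORT",  # the DB port, as a string
  ]
end

names = link_env_names('mysql', 3306)
# => ["MYSQL_PORT", "MYSQL_PORT_3306_TCP_ADDR", "MYSQL_PORT_3306_TCP_PORT"]
```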
In effect, your unicorn CMD,
bundle exec unicorn -p 8080
is being replaced by
bundle exec rake db:migrate
since a Dockerfile keeps only its last CMD. Run your db:migrate first, and run it with a RUN instruction, because CMD is reserved for the container's main command.
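A sketch of what the tail of the Dockerfile would look like with a single CMD (commands taken from the question; note that a RUN-time migration only works if the DB is reachable during the image build, so many setups run it once from a linked one-off container instead):

```dockerfile
# Publish port 8080
EXPOSE 8080

# Keep db:migrate out of CMD; only one CMD survives, and it should be
# the container's main process: the unicorn server.
CMD ["bundle", "exec", "unicorn", "-p", "8080"]
```

The migration can then be run once against the same image, e.g. docker run --rm --link test-mysql:mysql -e SECRET_KEY_BASE=test sample_rails_games_app bundle exec rake db:migrate (flags copied from the question's run command).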
The other problem is with your database.yml file: host: localhost points your app at a DB server running in the same container as the application, which is why mysql2 falls back to the local socket. You should instead fill in the database.yml values from the env variables created when you link your source container (the application) to the destination container (the DB server container). The env variables are created in the source container.
More info here: https://docs.docker.com/userguide/dockerlinks/
So for example:
$ docker run --rm --name web2 --link db:db training/webapp env
. . .
DB_NAME=/web2/db
DB_PORT=tcp://172.17.0.5:5432
DB_PORT_5432_TCP=tcp://172.17.0.5:5432
DB_PORT_5432_TCP_PROTO=tcp
DB_PORT_5432_TCP_PORT=5432
DB_PORT_5432_TCP_ADDR=172.17.0.5
Your database.yml should then look something like this:
default: &default
  adapter: mysql2
  encoding: utf8
  pool: 5
  database: <%= ENV['DB_NAME'] %>
  username: root
  password: root
  host: <%= ENV['DB_PORT_5432_TCP_ADDR'] %>
  port: <%= ENV['DB_PORT_5432_TCP_PORT'] %>
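Rails runs database.yml through ERB before parsing it as YAML, which is why the <%= ENV[...] %> substitution works. A minimal Ruby sketch of that pass; the DB_* variable names follow the webapp example above, and the values are set by hand purely to show the substitution:

```ruby
require 'erb'
require 'yaml'

# A trimmed-down database.yml template using the link env vars.
template = <<~YML
  production:
    adapter: mysql2
    host: <%= ENV['DB_PORT_5432_TCP_ADDR'] %>
    port: <%= ENV['DB_PORT_5432_TCP_PORT'] %>
YML

# Simulate the variables Docker would inject into the linked container.
ENV['DB_PORT_5432_TCP_ADDR'] = '172.17.0.5'
ENV['DB_PORT_5432_TCP_PORT'] = '5432'

# ERB first, then YAML: the same order Rails uses.
config = YAML.load(ERB.new(template).result)
config['production']['host']  # => "172.17.0.5"
```

With the question's --link test-mysql:mysql flag, the variables would instead be named MYSQL_PORT_3306_TCP_ADDR and MYSQL_PORT_3306_TCP_PORT.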
Your Dockerfile cannot have two CMD instructions; only the last one is kept. The CMD that gets executed is:
CMD ["bunde", "exec", "rake", "db:migrate"]
and the other,
CMD ["bundle", "exec", "unicorn", "-p", "8080"]
is discarded.
See Supervisor (https://docs.docker.com/articles/using_supervisord/) if you want to run more than one process in a single container, or alternatively run two containers, one per process.
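Following that article, a hedged sketch of a supervisord.conf that would keep unicorn running as a foreground-managed process (the program name, working directory, and command here are illustrative assumptions, not from the question):

```ini
[supervisord]
; Run supervisord in the foreground so it can be the container's PID 1.
nodaemon=true

[program:unicorn]
command=bundle exec unicorn -p 8080
directory=/app
autorestart=true
```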