Forcing a new database connection on SSL SYSCALL error with CONN_MAX_AGE > 0

Using Django 1.7 and Celery on Heroku with Postgres and RabbitMQ.

I recently set the CONN_MAX_AGE parameter in Django to 60 or so, so that I could start pooling database connections. This worked fine until I ran into an issue where, if the database connection was killed for any reason, Celery would keep using the bad connection, consuming tasks but immediately throwing the following error on each one:

OperationalError: SSL SYSCALL error: Bad file descriptor
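For reference, the setting in question looks roughly like this (a minimal sketch; the dj_database_url call is an assumption about a typical Heroku Postgres setup):

```python
# settings.py -- relevant excerpt; dj_database_url is an assumed
# detail, as is typical for Heroku Postgres deployments.
import dj_database_url

DATABASES = {
    # Reuse each database connection for up to 60 seconds instead of
    # opening a new one for every request or task.
    'default': dj_database_url.config(conn_max_age=60),
}
```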


I would like to keep pooling database connections, but this has happened multiple times and I obviously cannot let Celery fail randomly. How can I get Django (or Celery) to force a new database connection only when this error is hit?
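One idea that might work, assuming the connection gets marked unusable after the error, is to close stale or broken connections from a Celery signal before each task runs (a sketch, not a tested fix):

```python
# e.g. in the module where the Celery app is defined.
from celery.signals import task_prerun
from django.db import close_old_connections

@task_prerun.connect
def close_stale_connections(**kwargs):
    # close_old_connections() is what Django itself calls around each
    # HTTP request: it closes any connection that is past its
    # CONN_MAX_AGE or has been marked unusable after an error, so the
    # next query opens a fresh connection instead of reusing the dead
    # file descriptor.
    close_old_connections()
```

This wouldn't save the task during which the connection dies, but it should stop every subsequent task from failing on the same dead connection.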

(Alternatively, another idea I had was to run the Celery worker with a modified settings.py, one that sets CONN_MAX_AGE = 0 for Celery only... but that looks a lot like the wrong way to do it.)
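For concreteness, that override might look like this (celery_settings.py is a hypothetical module name; the worker would then be started with DJANGO_SETTINGS_MODULE pointing at it, e.g. in the Procfile):

```python
# celery_settings.py -- hypothetical module implementing the
# "separate settings for the worker" idea above.
from settings import *  # noqa: F401,F403 -- reuse the web settings

# Disable persistent connections for the Celery worker only, so a
# dead connection can never be reused across tasks.
DATABASES['default']['CONN_MAX_AGE'] = 0
```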

Note that this StackOverflow question seems to solve the problem for Rails, but I haven't found an equivalent for Django: On Heroku, Cedar, with Unicorn: Getting ActiveRecord::StatementInvalid: PGError: SSL SYSCALL error: EOF encountered





1 answer


I had the same problem and traced it down to a combination of CONN_MAX_AGE and CELERYD_MAX_TASKS_PER_CHILD. At that point it became apparent that it must have something to do with Celery not closing connections properly when a worker process is replaced, and from there I found this bug report: https://github.com/celery/celery/issues/2453



Switching to celery 3.1.18 seems to have solved the problem for me.
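On Heroku, pinning that release in requirements.txt is enough to pick it up on the next deploy (a minimal sketch):

```
# requirements.txt -- pin the Celery release that resolved the issue
celery==3.1.18
```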









