Celery worker exited prematurely, does not call on_failure
I have the following code:
class StatusTask(automata_celery.Task):
    def on_success(self, retval, task_id, args, kwargs):
        with app.app_context():
            cloaker = Cloaker.query.get(args[0])
            cloaker.status = RemoteStatus.LAUNCHED
            db.session.commit()

    def on_failure(self, exc, task_id, args, kwargs, einfo):
        with app.app_context():
            cloaker = Cloaker.query.get(args[0])
            cloaker.status = RemoteStatus.ERROR
            db.session.commit()


@celery.task(base=StatusTask)
def deploy_cloaker(cloaker_id):
    """To prevent launching while we are launching, we will
    disable launching until the cloaker status is LAUNCHED
    """
    cloaker = Cloaker.query.get(cloaker_id)
    if not cloaker.can_launch():
        return

    cloaker.status = RemoteStatus.LAUNCHING
    db.session.commit()

    host = cloaker.server.ssh_user + '@' + cloaker.server.ip
    execute(fabric_deploy_cloaker, cloaker, hosts=host)


def fabric_deploy_cloaker(cloaker):
    domain = cloaker.domain
    sudo('rm -rf /var/www/%s/html' % domain)  # Restartable process
    sudo('mkdir -p /var/www/%s/html' % domain)
When I put a wrong IP for the Fabric SSH connection (1.2.3.4), the Celery worker exits prematurely but doesn't execute the on_failure handler.
Take a look at the log it generates in the Celery console window:
[2017-07-31 01:04:45,231: WARNING/PoolWorker-8] [root@1.2.3.45] Executing task 'fabric_deploy_cloaker'
[2017-07-31 01:04:45,231: WARNING/PoolWorker-8] [root@1.2.3.45] sudo: rm -rf /var/www/google.com/html
[2017-07-31 01:04:55,328: WARNING/PoolWorker-8] Fatal error: Timed out trying to connect to 1.2.3.45 (tried 1 time)
Underlying exception:
timed out
[2017-07-31 01:04:55,328: WARNING/PoolWorker-8] Aborting.
[2017-07-31 01:04:59,126: ERROR/MainProcess] Task handler raised error: WorkerLostError('Worker exited prematurely: exitcode 0.',)
Traceback (most recent call last):
File "/Users/vng/.virtualenvs/AutomataHeroku/lib/python2.7/site-packages/billiard/pool.py", line 1224, in mark_as_worker_lost
human_status(exitcode)),
WorkerLostError: Worker exited prematurely: exitcode 0.
However, when I check the status of this task, I see the following:
state=FAILURE status=FAILURE message=Worker exited prematurely: exitcode 0.
How can I handle this error gracefully?
My application needs to set cloaker.status to START or ERROR so that my end users can restart this task manually.
I faced the same problem in my project and found two possible workarounds.

The first is to avoid duplicating (and having to keep in sync!) Celery's own task state and your application state (RemoteStatus.LAUNCHED). For that you will need to keep the AsyncResult returned by apply_async(), or at least the task id.
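A minimal sketch of that idea, assuming a hypothetical task_id column on the Cloaker model and a check_cloaker_status() helper (neither is in the original code): persist the id that apply_async() returns and ask Celery for the state instead of mirroring it in RemoteStatus.

from celery.result import AsyncResult


def launch_cloaker(cloaker):
    # Keep the Celery task id on the row instead of a LAUNCHING/LAUNCHED flag.
    result = deploy_cloaker.apply_async(args=[cloaker.id])
    cloaker.task_id = result.id
    db.session.commit()


def check_cloaker_status(cloaker):
    # Even when the pool process dies, Celery records the task as FAILURE,
    # as the question's output shows (state=FAILURE status=FAILURE ...).
    result = AsyncResult(cloaker.task_id, app=celery)
    return result.state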
The second is to wrap the actions that can end in a WorkerLostError in a try / except:
host = cloaker.server.ssh_user + '@' + cloaker.server.ip
try:
    assert_execute(fabric_deploy_cloaker, cloaker, hosts=host)
except Exception:
    raise FabricDeployError("Something went wrong")
else:
    execute(fabric_deploy_cloaker, cloaker, hosts=host)
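For completeness, a hedged sketch of how that wrapping could sit inside the original deploy_cloaker task, assuming FabricDeployError is an exception class defined elsewhere in the application. One caveat that is not part of the original answer: Fabric 1.x reports a fatal error by calling abort(), which raises SystemExit by default, so a plain except Exception never sees it; setting env.abort_exception makes Fabric raise an ordinary exception that the wrapper (and therefore on_failure) can handle.

from fabric.api import env, execute

env.abort_exception = RuntimeError  # have Fabric raise instead of calling sys.exit()


@celery.task(base=StatusTask)
def deploy_cloaker(cloaker_id):
    cloaker = Cloaker.query.get(cloaker_id)
    if not cloaker.can_launch():
        return
    cloaker.status = RemoteStatus.LAUNCHING
    db.session.commit()

    host = cloaker.server.ssh_user + '@' + cloaker.server.ip
    try:
        execute(fabric_deploy_cloaker, cloaker, hosts=host)
    except Exception:
        # Re-raising a normal exception lets Celery mark the task as FAILURE
        # and run StatusTask.on_failure, which sets RemoteStatus.ERROR.
        raise FabricDeployError("Fabric deploy failed for %s" % host)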