In Celery, how to ensure that tasks are completed when a worker dies

First of all, please do not treat this question as a duplicate of this question.

I have an environment set up that uses Celery with Redis as both the broker and the result_backend. My question is: how can I make sure that when a Celery worker crashes, all of its scheduled tasks are retried once the worker comes back?

I have seen advice to use CELERY_ACKS_LATE = True so that the broker will re-deliver tasks until it receives an ACK, but in my case it doesn't work. Whenever I schedule a task, it immediately goes to a worker, which holds it until the scheduled execution time. Let me give you an example:

I schedule a task as follows: res = test_task.apply_async(countdown=600)

but immediately in the Celery worker logs I see something like: Got task from broker: test_task[a137c44e-b08e-4569-8677-f84070873fc0] eta:[2013-01-...].
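
For reference, here is roughly how the task is defined (a minimal sketch, assuming Celery 3.x with django-celery; the task name and body are just placeholders):

from celery import task

# acks_late here is the per-task form of the global CELERY_ACKS_LATE setting
@task(acks_late=True)
def test_task():
    print("test_task executed")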

Now when I kill the Celery worker, these scheduled tasks are lost. My settings:

BROKER_URL = "redis://localhost:6379/0"  
CELERY_ALWAYS_EAGER = False  
CELERY_RESULT_BACKEND = "redis://localhost:6379/0"  
CELERY_ACKS_LATE = True
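
If it is relevant: I understand that with the Redis broker, unacknowledged messages are only re-delivered after the transport's visibility_timeout, so something like the sketch below may matter for long countdowns (the values are only illustrations, not settings I have verified):

# Sketch only: visibility_timeout presumably needs to exceed the longest
# countdown/ETA, since unacked messages (e.g. from a dead worker) are
# re-delivered only after this many seconds.
BROKER_TRANSPORT_OPTIONS = {"visibility_timeout": 3600}
# Also illustrative: limit how many messages each worker prefetches at once.
CELERYD_PREFETCH_MULTIPLIER = 1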

      

scheduled-tasks redis celery celery-task django-celery




No one has answered this question yet



