Laravel Database Queue, "Killed" after a few seconds

I am having a problem in my Laravel project: I am trying to transcode a video file of about 450MB with FFMPEG, which takes a long time, so I am using Queues in Laravel for this.

Due to the configuration of my production environment I have to use database queues. The problem is that the queued job gets killed after about 60 seconds every time I run the command php artisan queue:work

in my Vagrant box.

There are 4GB of RAM available in the Vagrant box, with 2D and 3D acceleration enabled, and memory_get_peak_usage()

never reports anything above 20MB throughout the entire process.

I checked php_sapi_name()

and it returns cli as expected, so there shouldn't be any runtime limits at all. I went into the CLI php.ini file and removed any limits anyway, just to be sure.

I tried restarting Vagrant, but the job still gets Killed after a few seconds.

So, I decided to try creating a Laravel console command for the transcoding process instead, hard-coding the file paths and so on, and it now runs without getting Killed ...

Am I missing something about queues? I just run php artisan queue:work

and do not specify a timeout of any kind, so why is my queued job killed?

Thanks in advance for your help.



1 answer


The default timeout for jobs is 60 seconds, as you discovered. The timeout is set with the option --timeout[=TIMEOUT]

on queue:work, and it can be disabled entirely with --timeout=0

.



php artisan queue:work --timeout=0
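As an alternative to passing the flag on every invocation, Laravel also lets a job class declare its own timeout via a public $timeout property. A minimal sketch, assuming a hypothetical TranscodeVideo job class (the class name and the transcoding body are illustrative, not from the original post):

```php
<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

// Hypothetical job class for illustration only.
class TranscodeVideo implements ShouldQueue
{
    use InteractsWithQueue, Queueable, SerializesModels;

    // Override the worker's default 60-second timeout for this
    // job; 0 disables the timeout entirely, as with --timeout=0.
    public $timeout = 0;

    public function handle()
    {
        // Long-running FFMPEG transcoding work would go here.
    }
}
```

If you stay on the database driver, also check the retry_after value in config/queue.php: it should be larger than your longest-running job, otherwise a second worker may pick the job up again while it is still transcoding.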
