Offloading script work post-response: techniques and best practices?

First, the setting:

<backstory>

I have a script that performs several tasks after the user clicks an upload button that sends the script the data it needs. This part is currently required; at the moment we have no way to disable real-time loading and retrieval from the source.

This section is intentionally long to make a point. Skip it if you hate that.

At the moment, the data is parsed from a really funky source with a regular expression and then split into an array. The script then checks the DB for any existing data in the incoming data's date range. If data for those date ranges doesn't already exist in the DB, it inserts the new data and reports success to the user (there are also some security checks, a data-source check, and basic validation) ... If the data does exist, the script pulls what's already in the DB, finds the differences between the two sets, deletes the old data that no longer matches, adds the new data, and then sends an email to each person affected by the changes (one email per person, with all of that person's relevant changes in it, which is a whole separate step).

The email addresses are pulled via an LDAP lookup: our DB stores everyone's work email address, but LDAP also has their personal address, which ensures they get the notice before they come in the next day and get caught not knowing about the changes. Finally, the person who uploaded the data is told, "Changes have been made, emails sent." Which is really all they care about.
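(For illustration, a hedged sketch of the kind of LDAP lookup described above; the host, base DN, and attribute names are assumptions, not details from our actual directory:)

// Hypothetical LDAP email lookup: find a person's mail attribute by uid.
$ds = ldap_connect('ldap://ldap.example.com');
ldap_set_option($ds, LDAP_OPT_PROTOCOL_VERSION, 3);
ldap_bind($ds); // anonymous bind, assuming the directory allows it

$filter = '(uid=' . ldap_escape($username, '', LDAP_ESCAPE_FILTER) . ')';
$result = ldap_search($ds, 'ou=people,dc=example,dc=com', $filter, array('mail'));
$entries = ldap_get_entries($ds, $result);
$personalEmail = ($entries['count'] > 0) ? $entries[0]['mail'][0] : null;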

Now I'm adding the Google Calendar API to push the data (it's scheduling data) to each affected user's Google Calendar. I would have done this against our Exchange calendars, but I figured I'd cut my teeth on Google's APIs before setting up WebDAV for Exchange.

</backstory>

Now!

Practical question

At this point, before the Google integration, the script takes no more than a second and a half to run. Pretty impressive, at least I think so (that's the server's doing, not my coding). But the Google bit, in benchmarks, is SLOOOOW. We can probably fix that, but it raises a bigger question ...

What is the best way to defer some of the work until after the user has received confirmation that the database has been updated? That's the part that interests the user most and the part that's most critical. The email notifications and Google Calendar updates only matter to the people affected by the upload, and if there are problems with those notifications, they will hear about it (and then I will hear about it) regardless of whether the script informed the uploader first.

So, is there a way, for example, to fire off a cron job as the script's last act? Can PHP create cron jobs via exec()? Is there a standardized way to handle work that has to happen post-execution?

Any advice on this is really appreciated. I feel like my overblown scripts reflect my stage of development and my need to finally learn how to divide labor in web applications.

But I'm also worried that this shouldn't be offloaded at all, since the user needs to know when all the tasks have completed, etc. Which brings up the:

Best practice / more subjective question

Basically, is the thinking that progress bars, real-time status updates, and other ways of keeping the user tied to the script (combined with code optimization, of course) are the better, more preferred route, versus just saying, "Your part is done; we'll notify the affected users from here," and so on?

Are there any BIG things to avoid (other than the obvious one of giving the user no feedback at all)?

Thanks for reading. The coding part is the critical one, so don't feel obligated to cover the second question, but please don't skip the coding part!

2 answers


There are several ways to do this. You can exec(), as in the other answer, but you might run into a DoS situation if the submit button gets too many clicks. The pcntl extension is arguably better at managing such processes; check this post for the discussion (there are 3 parts).
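For what it's worth, a minimal pcntl sketch, assuming a CLI SAPI (pcntl generally isn't available under mod_php, so this fits a command-line dispatcher better than a web request; do_slow_work() is a hypothetical stand-in):

pcntl_signal(SIGCHLD, SIG_IGN);  // auto-reap children to avoid zombies
$pid = pcntl_fork();
if ($pid === -1) {
    exit("fork failed\n");       // could not fork; handle the error
} elseif ($pid === 0) {
    do_slow_work();              // child: emails, calendar calls, etc.
    exit(0);
}
// parent continues and can confirm success to the user right away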

You can use JavaScript to fire a second AJAX request that kicks off the corresponding worker script. By calling ignore_user_abort() and sending a Content-Length header, the browser can close the connection early while your Apache process keeps running and processing your data. Upside: no forkbomb potential. Downside: it ties up more Apache processes.
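A minimal sketch of that connection-close trick, assuming a JSON confirmation payload (names are illustrative):

ignore_user_abort(true);   // keep running even if the client disconnects
set_time_limit(0);         // no time cap on the slow tail

ob_start();
echo json_encode(array('status' => 'ok', 'message' => 'Database updated.'));

header('Content-Length: ' . ob_get_length());
header('Connection: close');
ob_end_flush();
flush();                   // browser now has its answer and can disconnect

// ... the slow work (diff emails, Google Calendar calls) continues here ...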

Another option is a background cron job that watches a process-queue table for things to be done "later": you insert items into the table from the front end, and the cron job deletes them as it processes them (see Zend_Queue).
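A rough sketch of that pattern, with a hypothetical job_queue table and PDO (table, columns, and DSN are all assumptions):

// Front end: record the pending work and return to the user immediately.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'secret');
$stmt = $pdo->prepare(
    'INSERT INTO job_queue (task, payload, created_at) VALUES (?, ?, NOW())'
);
$stmt->execute(array('notify_changes', json_encode($changes)));

// worker.php, run from cron (e.g. * * * * * php /path/to/worker.php):
// drain the queue, deleting each item once it has been handled.
$jobs = $pdo->query('SELECT id, task, payload FROM job_queue ORDER BY id');
foreach ($jobs as $job) {
    // ... dispatch on $job['task']: send the emails, hit Google Calendar ...
    $pdo->prepare('DELETE FROM job_queue WHERE id = ?')->execute(array($job['id']));
}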



Another is to use a more distributed work framework like gearmand, which can farm items out to workers on other machines.
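A sketch, assuming gearmand is running and the PECL gearman extension is installed (the function name and payload are illustrative):

// Client side (in the web request): hand the job off and return at once.
$client = new GearmanClient();
$client->addServer('127.0.0.1', 4730);
$client->doBackground('notify_changes', json_encode($changes));

// Worker side (a long-running CLI process, possibly on another machine):
$worker = new GearmanWorker();
$worker->addServer('127.0.0.1', 4730);
$worker->addFunction('notify_changes', function (GearmanJob $job) {
    $changes = json_decode($job->workload(), true);
    // ... send the emails and update the calendars ...
});
while ($worker->work()) { }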

It all depends on your general capabilities and requirements.


A cron job is good for this. If all you want to tell the user when they upload the data is "Hey, thanks for the data!", then it will work fine.

If you prefer a more direct approach, you can use exec() to start a background process. On Linux it would look something like this:

exec("php /path/to/your/worker/script.php >/dev/null &");

The & part says "put me in the background." The >/dev/null part redirects the output to a black hole. As for handling errors and notifying the appropriate parties, that all depends on the design of your worker script.
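If the worker needs to know which upload to process, one common pattern is to pass an identifier on the command line; $uploadId is hypothetical here, and escapeshellarg() keeps the argument safe:

// 2>&1 also silences stderr, so the call cannot block on output.
$cmd = sprintf(
    'php /path/to/your/worker/script.php %s >/dev/null 2>&1 &',
    escapeshellarg((string) $uploadId)
);
exec($cmd);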

For a more flexible, cross-platform approach, check out this post in the PHP Manual.
