Reducing CPU processing time with nice?
My hosting provider (pairNetworks) has specific rules for scripts running on the server. I'm trying to compress a file for backup purposes and would ideally use bzip2 to take advantage of its awesome compression ratio. However, compressing this 90MB file sometimes takes around 1.5 minutes, and one of the resource rules is that a script may only use 30 seconds of CPU time.
If I use the nice command to "nicefy" the process, does that reduce the total CPU processing time? Is there another command I could use instead of nice? Or will I have to switch to a compression utility that doesn't take as long?
Thanks!
EDIT: This is what their support page says:
- Start any process that requires more than 16 MB of memory space.
- Run any program that requires more than 30 CPU seconds.
EDIT: I am running it in a bash script from the command line
nice changes the priority of the process, so it gets its CPU seconds later (or sooner). If the rule really refers to CPU seconds, as you state in your question, nice won't help you at all; your process will just be killed at a different time.
As for a solution, you could try splitting the file into three 30MB pieces (see split(1)), each of which you can compress within the allotted time. Later you decompress the pieces and use cat to reassemble them. Depending on whether the file is binary or text, you can use the -b or -l options to split.
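A runnable sketch of the split-compress-reassemble approach, at demo scale (file names and sizes are illustrative; on the real server you would use 30MB pieces of the 90MB file):

```shell
# Create a sample file standing in for the backup (300KB of random data here)
head -c 300000 /dev/urandom > backup.tar

# Split into 100KB pieces: backup.aa, backup.ab, backup.ac
split -b 100k backup.tar backup.

# Compress each piece in a separate run, so each stays under the CPU limit
for piece in backup.a?; do
    bzip2 "$piece"
done

# To restore: decompress each piece, then concatenate them in order
bunzip2 backup.a?.bz2
cat backup.a? > restored.tar

# Verify the round trip
cmp backup.tar restored.tar && echo "round trip OK"
```

Since bzip2 can decompress multi-stream files, you can also just concatenate the compressed pieces (`cat backup.a?.bz2 > backup.tar.bz2`) and bunzip2 the result in one go later, on a machine without the CPU limit.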
No, nice only affects how your process is scheduled. Simply put, a process that needs 30 CPU seconds will always need 30 CPU seconds, even if its execution is spread out over several hours.
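You can see this for yourself with bash's time keyword (a quick sketch; the file names are made up):

```shell
# Make a sample file to compress (5MB of random data)
head -c 5000000 /dev/urandom > sample.dat

# Run the same compression at normal and at lowest priority.
# The "user" + "sys" CPU seconds will be essentially identical in both runs;
# only the elapsed wall-clock time can differ on a loaded machine.
time bzip2 -c sample.dat > normal.bz2
time nice -n 19 bzip2 -c sample.dat > niced.bz2

# The work done is the same either way
cmp normal.bz2 niced.bz2 && echo "identical output"
```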
I always get a thrill when I load all the cores of my machine with some massive processing job, but with everything niced. I really enjoy seeing the CPU monitor pegged while I surf the internet without any noticeable lag.