Reduce the size of .forever log files without interrupting the forever process

The log files (at /root/.forever) generated by forever have grown very large and are close to filling the hard disk.

If a log file is deleted while the forever process is still running, forever logs 0 returns undefined. The only way to get logging back for the currently running process is to stop it and start the node script again.

Is there a way to just truncate the log file without interrupting the logging or the forever process?


2 answers


So Foreverjs will keep writing to the same file descriptor; ideally it would support something that lets you send it a signal and have it rotate to a different file. Short of that, which would require a code change in the Forever.js package, your options look like this:

Command line version:

  • Make a backup
  • Zero out the file

cp forever-guid.log backup && :> forever-guid.log;

This carries a small risk: if the log file is being written to quickly, a log line may land between the backup and the zeroing and be lost.
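For example, a timestamped variant of the same two steps (the GUID in the file name is hypothetical; substitute your own):

    # Hypothetical path and GUID; adjust to your setup.
    LOG=/root/.forever/forever-guid.log
    cp "$LOG" "$LOG.$(date +%Y%m%d-%H%M%S).bak" && : > "$LOG"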



Use logrotate with copytruncate

You can configure logrotate to watch the forever log directory and copy and truncate the files automatically based on file size or time; a sketch of such a config follows.
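A minimal sketch, assuming the logs live under /root/.forever and a 50M size threshold (dropped into, say, /etc/logrotate.d/forever):

    /root/.forever/*.log {
        size 50M
        rotate 5
        copytruncate
        compress
        missingok
        notifempty
    }

copytruncate is the key directive here: logrotate copies the file aside and then truncates the original in place, so the descriptor forever holds is never invalidated.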

Have your node code handle this

You can have your logging code check how many lines (or bytes) are in the log file and then do the copy-and-truncate itself, avoiding the potential data loss; see the sketch after this paragraph.
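A minimal sketch of that approach, with a hypothetical log path and size threshold, run from inside the node script that forever manages:

    // Hypothetical path and threshold; adjust to your setup.
    var fs = require('fs');
    var LOG = '/root/.forever/forever-guid.log';
    var MAX_BYTES = 50 * 1024 * 1024; // ~50 MB

    setInterval(function () {
      fs.stat(LOG, function (err, stats) {
        if (err || stats.size < MAX_BYTES) return;
        // Copy the current contents aside, then truncate in place so the
        // descriptor forever holds stays valid.
        var backup = LOG + '.' + Date.now() + '.bak';
        fs.createReadStream(LOG)
          .pipe(fs.createWriteStream(backup))
          .on('finish', function () {
            fs.truncate(LOG, 0, function () {});
          });
      });
    }, 60 * 1000); // check once a minute

The same caveat as the command-line version applies: lines written between the copy and the truncate can be lost.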

EDIT: I originally thought that split and truncate could do the job. They probably can, but the implementation would look very awkward. split has no good way to split the file into a short piece (the ongoing log) and a long piece (the backup). truncate (which, besides, is not always installed) does not reset the write pointer, so forever just keeps writing at the same byte offset as before, resulting in weird data.


You can truncate the log file without the writer losing its descriptor (link):



cat /dev/null > largefile.txt

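Applied to the question's setup (again with a hypothetical GUID in the file name):

    cat /dev/null > /root/.forever/forever-guid.log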