Using a page file

I am running a script that processes text on the filesystem.

The script runs over text files (.h, .cpp).

As the script runs, I can see page file (PF) usage increase until it reaches the amount of virtual memory allocated for the page file.

Is there a way to reset this virtual memory usage during or after a run?

I have another question related to this (I thought it was a separate problem): Single sed command for multiple substitutions?
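As a side note on the linked question: sed accepts several substitutions in a single invocation via multiple -e expressions (or ;-separated commands). A minimal illustration, with made-up patterns:

```shell
# Two substitutions applied in one sed pass over the input.
printf 'foo baz\n' | sed -e 's/foo/bar/' -e 's/baz/qux/'
# prints: bar qux
```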


3 answers


Chunk or batch the work so you use memory efficiently instead of loading everything at once. If none of your files are large, limit the number of threads loading text from these files into memory. If you have large files, split them into pieces so they can be processed within the memory you have.
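A minimal sketch of the splitting idea, with file names made up for the example. Splitting by lines (rather than bytes) avoids cutting a match in half at a chunk boundary:

```shell
set -e
dir=$(mktemp -d)
cd "$dir"

# Stand-in for a large input file: 1000 lines of "foo".
printf 'foo\n%.0s' $(seq 1 1000) > bigfile.txt

# Split into 300-line chunks, process each chunk independently,
# then reassemble the result.
split -l 300 bigfile.txt piece_
for p in piece_*; do
    sed -i 's/foo/bar/g' "$p"
done
cat piece_* > bigfile.txt
rm piece_*
```

Only one chunk is on disk being rewritten at a time, so peak memory use is bounded by the chunk size rather than the whole file.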





No, but you can probably modify the script so it consumes less memory.

Update: I tried to reproduce the issue on Linux with a script corresponding to the one mentioned in the other question. In Bash:



while read -r fileName; do

    echo
    echo "-----------------------------------------------"
    echo "For file $fileName:"

    while read -r matchItem; do
        echo "Searching for $matchItem"
        echo
        sed -i "s/$matchItem/XXXXXXXXX $matchItem XXXXXXXXXXXXXX/" "$fileName"
    done < allFilesWithH.txt

done < all.txt

For the test I used fragments of a protein sequence database (a large text file in FASTA format, up to 74 MB) and short peptide sequences as patterns (so there were at least 10 substitutions per file). While it is running, the process does not use significant memory (as I expected), and CPU load is about 50%. Thus, I cannot reproduce the problem.
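Incidentally, the nested loops above run sed once per pattern per file. A sketch of a single-pass variant (not the original poster's script; sample data stands in for allFilesWithH.txt and all.txt): generate one sed script containing all substitutions, then run sed once per file.

```shell
set -e
dir=$(mktemp -d)
cd "$dir"

# Sample data standing in for the real pattern and file lists.
printf 'foo\nbaz\n' > allFilesWithH.txt
printf 'a.txt\n'    > all.txt
printf 'foo bar baz\n' > a.txt

# Turn each pattern line into an s/// command; each & in the
# replacement expands to the matched pattern line.
sed 's|.*|s/&/XXXXXXXXX & XXXXXXXXXXXXXX/|' allFilesWithH.txt > subs.sed

# One sed pass per file, applying all substitutions at once.
while read -r fileName; do
    sed -i -f subs.sed "$fileName"
done < all.txt

cat a.txt
```

This reduces the number of passes over each file from one per pattern to one total, which matters once the files or the pattern list get large.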



The paging file is a system resource and cannot be reset by a user process. In this case the paging file size is simply a symptom of an application problem: the application exceeds its commit limit. You need to fix the cause, not the symptom.
