Worker threads stop working after a while

I have a sequential application that I parallelized using OpenMP. I just added the following to my main loop:

#pragma omp parallel for default(shared)
for (int i = 0; i < numberOfEmitters; ++i)
{
    computeTrajectoryParams* params = new computeTrajectoryParams;
    // defining params...
    outputs[i] = (int*) ComputeTrajectory(params);
    delete params;
}

This seems to work well: at the beginning, all my worker threads are iterating through the loop, everything is going fast, and I have 100% CPU usage (on a quad-core machine). However, after a while, one of the worker threads stops and stays in a function named _vcomp::PersistentThreadFunc from vcomp90.dll (file vctools\openmprt\src\ttpool.cpp), then another, and so on, until only the main thread remains.

Does anyone have any idea why this is happening? This starts to happen after about half of the iterations have been completed.

+2




2 answers


This can depend on the scheduling scheme and on how much computation each iteration does. With static scheduling (the default), iterations are divided among the threads before the loop starts: on a quad-core machine, each thread receives a quarter of the indices. Some threads can then finish long before others, either because their iterations happen to be cheaper or because their cores are more loaded with other work, and once a thread's share is done it sits idle while the rest keep going.



Try dynamic scheduling and see if it works better.

+6




A small comment on your code: if ComputeTrajectory's runtime is measured in milliseconds and you run many iterations, make sure you use an MP-optimized memory allocator, because you allocate on every iteration and (even today) most allocators use a global pool guarded by a global lock, which serializes your threads.



You could also look at hoisting the allocation out of the loop entirely, but there is not enough information here to know whether that is possible.

+2



