Concurrency with producer / consumer (sort of) and STL

I have the following situation, I have two threads

  • thread1, a worker thread that runs the algorithm as long as its input list is non-empty
  • thread2, which is asynchronous (user driven) and can add items to the input list for processing

The loop in thread1 does something like the following:

list input_list
list buffer_list

if (input_list.size() == 0)
  sleep

while (input_list.size() > 0) {
  for (item in input_list) {
    process(item);
    possibly add items to buffer_list
  }
  input_list = buffer_list (or copy it)
  buffer_list = new list (or empty it)
  sleep X ms (100 < X < 500, still have to decide)
}


Now thread2 just adds items to buffer_list (which becomes the input for the next pass of the algorithm) and may wake thread1 if it is sleeping.
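The add-and-wake interaction described above can be sketched with a condition variable; this is a minimal illustration of the situation, not the asker's actual code, and all names are assumptions:

```cpp
#include <condition_variable>
#include <list>
#include <mutex>
#include <utility>

std::list<int> buffer_list;      // items pending for the next pass
std::mutex m;
std::condition_variable cv;
bool stop = false;               // set to true to shut thread1 down

// thread2: add an item and wake thread1 if it is sleeping
void addItem(int item) {
    {
        std::lock_guard<std::mutex> lock(m);
        buffer_list.push_back(item);
    }
    cv.notify_one();             // notify outside the lock
}

// thread1: sleep until there is work (or a stop request), then take it all
std::list<int> waitForWork() {
    std::unique_lock<std::mutex> lock(m);
    cv.wait(lock, [] { return !buffer_list.empty() || stop; });
    std::list<int> work;
    std::swap(work, buffer_list);   // O(1); leaves buffer_list empty
    return work;
}
```

With this shape, thread2 never waits longer than the time needed to push one element into the list.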

I'm trying to figure out what kind of multithreading problems this situation might have, assuming I am programming it in C++ with the STL (no assumptions about thread-safe implementations), and I of course have access to the standard library (e.g. std::mutex).

I would like to avoid any delay in thread2, since it is UI-related and blocking it would create visible lag. I was thinking about using three lists to avoid synchronization issues, but I'm not sure yet. I'm also not sure whether there is a container in the STL better suited to this particular situation. I don't want to just put a mutex around everything and lose so much performance.

Any advice would be greatly appreciated, thanks!

EDIT:

This is what I have been doing so far, and I'm wondering whether it is thread-safe and efficient enough:

#include <algorithm>   // std::swap
#include <semaphore.h>
#include <set>

std::set<Item> *extBuffer, *innBuffer, *actBuffer;
sem_t mutex;  // binary semaphore used as a mutex

void thread1Function()
{
  actBuffer->clear();
  sem_wait(&mutex);
  if (!extBuffer->empty())
    std::swap(actBuffer, extBuffer);
  sem_post(&mutex);

  if (!innBuffer->empty())
  {
    if (actBuffer->empty())
      std::swap(innBuffer, actBuffer);
    else
    {
      actBuffer->insert(innBuffer->begin(), innBuffer->end());
      innBuffer->clear();  // avoid reprocessing these items next pass
    }
  }

  if (!actBuffer->empty())
  {
    std::set<Item>::iterator it;
    for (it = actBuffer->begin(); it != actBuffer->end(); ++it)
    {
      // process *it
      // possibly innBuffer->insert(...)
    }
  }
}

void thread2Add(Item item)
{
  sem_wait(&mutex);
  extBuffer->insert(item);
  sem_post(&mutex);
}


Perhaps I should open another question





2 answers


If you are worried about thread2 being blocked for a long time because thread1 is holding a lock, make sure that thread1 only holds the lock for a very short time.

This is easy to accomplish if you keep two instances of the buffer list, so your attempt is already heading in the right direction.

Keep a pointer to each buffer: one pointer is used to insert items into one list (thread2), and the other pointer is used to process the items in the other list (thread1). The insert operation in thread2 must be surrounded by a lock.



When thread1 is ready to process the next batch of elements, it only needs to swap the two pointers (for example, with std::swap). This is a very fast operation, and it is the only one that needs to be locked; the actual processing of the items happens without holding the lock.

This solution has the following advantages:

  • The lock in thread1 is always held for a very short time, so the amount of time thread2 can be blocked is minimal
  • There is no repeated dynamic allocation of buffers, which is faster and less prone to memory-leak errors
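The swap approach described above can be sketched as follows; the names are illustrative, and `Item` stands in for the real element type:

```cpp
#include <list>
#include <mutex>
#include <utility>

struct Item { int value; };       // placeholder for the real item type

std::list<Item> sharedBuffer;     // thread2 inserts here
std::mutex bufferMutex;

// thread2: insertion holds the lock only for the duration of one push
void addItem(const Item& item) {
    std::lock_guard<std::mutex> lock(bufferMutex);
    sharedBuffer.push_back(item);
}

// thread1: take all pending items by swapping under the lock,
// then process them without holding the lock
std::list<Item> takePending() {
    std::list<Item> pending;
    {
        std::lock_guard<std::mutex> lock(bufferMutex);
        std::swap(pending, sharedBuffer);   // O(1) for std::list
    }
    return pending;
}
```

Because the critical section is just a swap (or a single push), neither thread can delay the other for longer than a few instructions.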




You only need a mutex when inserting into, deleting from, or querying the size of the container. You can design a class that encapsulates the container together with its mutex; this simplifies things, and the class handles its own locking internally. If you restrict access to the elements to a small set of exposed functions, and keep those functions small (each simply calling the encapsulated container's function under the mutex), they will return relatively quickly. In that case you only need one list.
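A minimal sketch of such a wrapper class, with illustrative names and `std::list` as the underlying container:

```cpp
#include <cstddef>
#include <list>
#include <mutex>

// Encapsulates a container together with its mutex; callers never
// touch the lock directly, so every operation is a short critical section.
template <typename T>
class SafeList {
public:
    void push(const T& value) {
        std::lock_guard<std::mutex> lock(mtx_);
        items_.push_back(value);
    }

    // Pops the front element into `out`; returns false if the list was empty.
    bool pop(T& out) {
        std::lock_guard<std::mutex> lock(mtx_);
        if (items_.empty())
            return false;
        out = items_.front();
        items_.pop_front();
        return true;
    }

    std::size_t size() const {
        std::lock_guard<std::mutex> lock(mtx_);
        return items_.size();
    }

private:
    mutable std::mutex mtx_;   // mutable so size() can stay const
    std::list<T> items_;
};
```

Note that exposing iterators from such a class would defeat the encapsulation, since the caller would then traverse the container outside the lock.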

Depending on the system, if semaphores are available you can check whether they are more efficient and use one instead of a mutex; the same concept applies either way.



You might also want to look at the scoped lock-guard concept, so that if one of the threads dies (or throws an exception) while holding the lock, you do not end up with a deadlock.
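The guard idea looks like this in standard C++; the helper function and names are made up for illustration:

```cpp
#include <mutex>
#include <stdexcept>

std::mutex m;
int counter = 0;

// Even if the work throws, the guard's destructor releases the mutex
// when the scope unwinds, so other threads are never left deadlocked.
void incrementSafely(bool fail) {
    std::lock_guard<std::mutex> lock(m);   // mutex locked here
    if (fail)
        throw std::runtime_error("worker failed");
    ++counter;
}                                          // mutex unlocked here, even on exception
```

Contrast this with manual lock()/unlock() calls, where an early return or exception between the two would leave the mutex locked forever.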









