C++: a function that locks a mutex against another function, but can itself execute in parallel

I have a question regarding thread safety and mutexes. I have two functions that must not run at the same time, because that could cause problems:

std::mutex mutex;

void A() {
    std::lock_guard<std::mutex> lock(mutex);
    //do something (shouldn't be done while function B is executing)
}

T B() {
    std::lock_guard<std::mutex> lock(mutex);
    //do something (shouldn't be done while function A is executing)
    return something;
}


The point is that functions A and B must not execute at the same time, which is why I am using a mutex. However, it is fine for function B to be called simultaneously from multiple threads, and the mutex prevents that as well (which I don't want). Is there a way to ensure that A and B never run at the same time, while still allowing B to run in parallel with itself?



3 answers


If C++14 is an option, you can use a shared mutex (sometimes called a read-write mutex). Within A() you would acquire a unique (exclusive, "writer") lock, and within B() you would acquire a shared (non-exclusive, "reader") lock.

As long as a shared lock is held, the mutex cannot be acquired exclusively by other threads, but it can still be acquired in shared mode; while an exclusive lock is held, the mutex cannot be acquired by any other thread at all.

As a result, multiple threads can execute B() at the same time, whereas a thread executing A() prevents any other thread from executing either A() or B() concurrently:



#include <shared_mutex>

std::shared_timed_mutex mutex;

void A() {
    std::unique_lock<std::shared_timed_mutex> lock(mutex);
    //do something (shouldn't be done while function B is executing)
}

T B() {
    std::shared_lock<std::shared_timed_mutex> lock(mutex);
    //do something (shouldn't be done while function A is executing)
    return something;
}


Note that some synchronization overhead will always be present, even for parallel executions of B(), and whether this ultimately gives you better performance than a plain mutex depends heavily on what happens inside and outside these functions - always measure before moving to a more complex solution.

Boost.Thread also provides a shared_mutex implementation.
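
If C++14 is not available but Boost is, the shape of the solution is the same. A minimal sketch, assuming Boost.Thread is linked; the return type and value of B() are placeholders, since the question uses a generic T:

#include <boost/thread/shared_mutex.hpp>
#include <boost/thread/locks.hpp>

boost::shared_mutex mutex;

void A() {
    boost::unique_lock<boost::shared_mutex> lock(mutex);  // exclusive ("writer") lock
    //do something (shouldn't be done while function B is executing)
}

int B() {  // illustrative return type; the question uses a placeholder T
    boost::shared_lock<boost::shared_mutex> lock(mutex);  // shared ("reader") lock
    //do something (shouldn't be done while function A is executing)
    return 0;  // placeholder for "something"
}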



You do have an option in C++14: use std::shared_timed_mutex. A will use lock(), and B will use lock_shared().
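
A minimal sketch of what this answer describes, calling the member functions directly; the return type and value of B are placeholders:

#include <shared_mutex>

std::shared_timed_mutex mutex;

void A() {
    mutex.lock();          // exclusive: waits until no shared or exclusive owner remains
    //do something (shouldn't be done while function B is executing)
    mutex.unlock();
}

int B() {                  // illustrative return type; the question uses a placeholder T
    mutex.lock_shared();   // shared: may be held by many threads at once
    //do something (shouldn't be done while function A is executing)
    int result = 0;        // placeholder for "something"
    mutex.unlock_shared();
    return result;
}

Calling lock()/unlock() by hand like this is not exception safe, which is why wrapping them in std::unique_lock / std::shared_lock, as in the previous answer, is usually preferable.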



This may well have bugs, but in the absence of C++14 you can create a lock-counting wrapper around std::mutex and use that:

#include <mutex>

// Lock-counting class: the first holder locks the underlying mutex,
// the last one to release unlocks it, so holders can overlap freely
class SharedLock
{
public:
   SharedLock(std::mutex& m) : count(0), shared(m) {}

   friend class Lock;

   // RAII lock
   class Lock
   {
   public:
      Lock(SharedLock& l) : lock(l) { lock.lock(); }
      ~Lock()                       { lock.unlock(); }
   private:
      SharedLock& lock;
   };

private:

   void lock()
   {
      std::lock_guard<std::mutex> guard(internal);
      if (count == 0)
      {
         shared.lock();
      }
      ++count;
   }


   void unlock()
   {
      std::lock_guard<std::mutex> guard(internal);
      --count;
      if (count == 0)
      {
         shared.unlock();
      }
   }

   int count;
   std::mutex& shared;
   std::mutex internal;
};

std::mutex shared_mutex;

void A()
{
   std::lock_guard<std::mutex> lock(shared_mutex);
   // ...
}


void B()
{
   static SharedLock shared_lock(shared_mutex);
   SharedLock::Lock mylock(shared_lock);
   // ...
}


... unless you want to dive into Boost, of course.
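
For illustration, a hypothetical driver that exercises the wrapper above: several threads call B() concurrently with each other, while the thread calling A() excludes all of them. It assumes the A() and B() definitions from this answer are in the same translation unit; the thread count is arbitrary:

#include <thread>
#include <vector>

void A();   // defined as in the snippet above
void B();   // defined as in the snippet above

int main()
{
   std::vector<std::thread> threads;

   // These four calls to B() may overlap with one another...
   for (int i = 0; i < 4; ++i)
      threads.emplace_back(B);

   // ...but A() runs only while no B() holds the counted lock, and vice versa.
   threads.emplace_back(A);

   for (auto& t : threads)
      t.join();
}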
