Non-blocking, awaitable, exclusive accessors

I have a thread-safe class that uses a specific resource which must be accessed exclusively. In my assessment, it doesn't make sense for the callers of the various methods to block on Monitor.Enter or await a SemaphoreSlim in order to access this resource.

For example, I have some "expensive" asynchronous initialization. Since it doesn't make sense to initialize more than once, whether from multiple threads or from one, multiple calls should return immediately (or even throw an exception). Instead, you should initialize once and only then distribute the instance across multiple threads.

UPDATE 1:

MyClass uses two NamedPipes, one in each direction. The method InitBeforeDistribute is not really an initialization but a proper connection setup in both directions. It doesn't make sense to make the pipe streams available to N threads before the connection is set up. Once it is set up, multiple threads can post messages, but only one at a time can actually read/write the stream. I apologize for the confusion caused by the badly named example.

UPDATE 2:

If InitBeforeDistribute implemented a SemaphoreSlim(1, 1) with proper wait logic (instead of throwing an exception when the operation is already in progress), would the AddSquare / DoSquare approach below be practical? It neither throws a redundant exception (as InitBeforeDistribute does) nor blocks.
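For reference, a minimal sketch of what the SemaphoreSlim(1, 1) variant with proper wait logic might look like (the class name is illustrative, and Task.Delay stands in for the real setup work):

```csharp
using System.Threading;
using System.Threading.Tasks;

class MyClassWithSemaphore
{
    private readonly SemaphoreSlim m_initLock = new SemaphoreSlim(1, 1);
    private volatile bool vm_isInited = false;

    public async Task InitBeforeDistribute()
    {
        // Waits its turn instead of throwing; concurrent callers queue up here
        // without blocking a thread.
        await this.m_initLock.WaitAsync().ConfigureAwait(false);
        try
        {
            if (this.vm_isInited)
                return; // a previous caller already finished the init

            await Task.Delay(100).ConfigureAwait(false); // stand-in for the real setup
            this.vm_isInited = true;
        }
        finally
        {
            this.m_initLock.Release();
        }
    }
}
```

With this shape, a second concurrent call simply awaits until the first one releases the semaphore and then returns early via the vm_isInited check.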

Below is the example in question:

class MyClass
{
    private int m_isIniting = 0; // exclusive access "lock"
    private volatile bool vm_isInited = false; // vol. because other methods will read it

    public async Task InitBeforeDistribute()
    {
        if (Interlocked.Exchange(ref this.m_isIniting, -1) != 0)
            throw new InvalidOperationException(
                "Cannot init concurrently! Did you distribute before init was finished?");

        try
        {
            if (this.vm_isInited)
                return;

            await Task.Delay(5000)      // init asynchronously
                .ConfigureAwait(false);

            this.vm_isInited = true;
        }
        finally
        {
            Interlocked.Exchange(ref this.m_isIniting, 0); // release the "lock"
        }
    }
}


A few points:

  • If there is a use case where blocking / waiting on the lock makes perfect sense, then this example is not the right approach (it only makes sense when there isn't).
  • Since I need to await inside the method, I would have to use something like SemaphoreSlim if I went with "proper" locking. With the Interlocked approach above I don't have to worry about disposing anything when I'm done. (I've always disliked the idea of having to dispose of an item that is being used by multiple threads. This is a minor plus, of course.)
  • If the method is called frequently, there may be some performance benefit, which of course would have to be measured.

The above example isn't relevant to the third point, so here is another example:

class MyClass
{
    private volatile bool vm_isInited = false; // see above example
    private int m_isWorking = 0; // exclusive access "lock"
    private readonly ConcurrentQueue<Tuple<int, TaskCompletionSource<int>>> m_squareWork =
        new ConcurrentQueue<Tuple<int, TaskCompletionSource<int>>>();

    public Task<int> AddSquare(int number)
    {
        if (!this.vm_isInited) // see above example
            throw new InvalidOperationException(
                "You forgot to init! Did you already distribute?");

        var work = Tuple.Create(number, new TaskCompletionSource<int>());
        this.m_squareWork.Enqueue(work);

        Task workTask = DoSquare(); // fire and forget; results flow through the TCS

        return work.Item2.Task;
    }

    private async Task DoSquare()
    {
        if (Interlocked.Exchange(ref this.m_isWorking, -1) != 0)
            return; // let someone else do the work for you

        do
        {
            try
            {
                Tuple<int, TaskCompletionSource<int>> work;

                while (this.m_squareWork.TryDequeue(out work))
                {
                    await Task.Delay(5000)      // Limiting resource that can only be
                        .ConfigureAwait(false); // used by one thread at a time.

                    work.Item2.TrySetResult(work.Item1 * work.Item1);
                }
            }
            finally
            {
                Interlocked.Exchange(ref this.m_isWorking, 0);
            }
        } while (this.m_squareWork.Count != 0 &&
            Interlocked.Exchange(ref this.m_isWorking, -1) == 0);
    }
}


Are there negative aspects of this "loose" approach that I should look out for?

Most questions related to "lock-free" code on SO tend to advise against it, claiming it is for "experts". Rarely (I could be wrong about this) do I see suggestions for books, blogs, etc. that one could delve into if so inclined. If there are any resources I should look into, please share. Any suggestions would be much appreciated!

+3




2 answers


Update: a great article on the topic: Creating High-Performance Locks and Lock-Free Code (for .NET).




  • The main question about lock-free algorithms is not whether they are "for experts".
    The main question is: do you really need a lock-free algorithm here?

    I cannot follow your logic here:

    Since it doesn't make sense to initialize multiple times, whether from multiple threads or one, multiple calls should return immediately (or even throw an exception).

    Why can't your callers simply wait for the initialization to finish and use the resource after that? If that is possible, just use Lazy<T> or even asynchronous lazy initialization.

  • You should really read up on consensus numbers and CAS operations, and why they matter when implementing your own synchronization primitives.

    Your code uses Interlocked.Exchange, which is not a CAS: it unconditionally swaps the value, and an atomic swap has a consensus number of 2. This means a primitive built on such a construction is only guaranteed to work correctly for 2 threads (not exactly your situation, but still, 2).

    I tried to determine whether your code works correctly for 3 threads, or whether there are circumstances that could leave your application in a corrupted state, but I gave up after 30 minutes. And any member of your team will, like me, give up after some time spent trying to understand your code. That is a waste of time, not just yours but your whole team's. Don't reinvent the wheel unless you have to.

  • My favorite book in the related area is Writing High-Performance .NET Code by Ben Watson, and my favorite blog is Stephen Cleary's. If you can be more specific about what you are interested in, I can add a few more links.

  • The mere absence of locks does not make your application lock-free. In a .NET application you really shouldn't use exceptions for the internal control flow of your program. Consider what happens if the initializing thread is not scheduled by the OS for a while (for whatever reason; it doesn't matter what exactly).

    In that case every other thread in your application will gradually fail while trying to access the shared resource. I can't call such code lock-free. Yes, it has no locks, but it does not guarantee the progress of the program, and therefore it is not lock-free by definition.
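The asynchronous lazy initialization mentioned above can be had with nothing more than Lazy<Task>. A sketch (the class name is illustrative, and Task.Delay stands in for the pipe setup):

```csharp
using System;
using System.Threading.Tasks;

class PipePair
{
    // Lazy<Task> (default LazyThreadSafetyMode.ExecutionAndPublication) gives
    // thread-safe, exactly-once, awaitable initialization: every caller awaits
    // the same Task, no one throws and no thread is blocked.
    private readonly Lazy<Task> m_init;

    public PipePair()
    {
        this.m_init = new Lazy<Task>(this.ConnectAsync);
    }

    public Task InitBeforeDistribute() => this.m_init.Value;

    private async Task ConnectAsync()
    {
        await Task.Delay(100).ConfigureAwait(false); // stand-in for the pipe setup
    }
}
```

Every thread that calls InitBeforeDistribute gets the same Task back, so "init before distribute" stops being a rule the callers must obey and becomes a property of the type.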

+1




The Art of Multiprocessor Programming by Maurice Herlihy and Nir Shavit is a great resource for lock-free and wait-free programming. Lock freedom is a progress guarantee, distinct from a programming style, so in order to claim that an algorithm is lock-free you need to verify or prove the progress guarantee. In simple terms, lock freedom means that blocking or stopping one thread does not block the progress of the other threads; equivalently, if threads take steps infinitely often, then some thread makes progress infinitely often.
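To make the progress guarantee concrete, here is the textbook lock-free pattern, a CAS retry loop (a sketch using Interlocked.CompareExchange; the names are illustrative). If one thread is suspended in the middle of the loop, the other threads still succeed:

```csharp
using System.Threading;

static class LockFreeCounter
{
    private static int s_value;

    public static int Increment()
    {
        while (true)
        {
            int observed = s_value;      // read the current value
            int desired = observed + 1;
            // CAS: store only if no one changed the value since we read it.
            // If the CAS fails, another thread's CAS succeeded, i.e. the
            // system as a whole made progress, so we simply retry.
            if (Interlocked.CompareExchange(ref s_value, desired, observed) == observed)
                return desired;
        }
    }
}
```

(In practice Interlocked.Increment does exactly this in one call; the explicit loop just makes the progress argument visible: a failed CAS always implies a successful one elsewhere.)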



0


source

