Is the "lock" construct made obsolete by Interlocked.CompareExchange<T>?
Summary:
I contend that:
- wrapping the fields that represent logical state into a single immutable, consumable object,
- updating the authoritative reference to that object with a call to Interlocked.CompareExchange<T>,
- and handling update failures appropriately
provides a kind of concurrency that makes the "lock" construct not only unnecessary, but a genuinely misleading construct that evades certain realities about concurrency and introduces a host of new problems of its own.
Discussion of the problem:
First, let's look at the main problems with locking:
- Locks cost performance, and they must be used in tandem on both reads and writes.
- Locking blocks progress, obstructs concurrency, and risks deadlocks.
Consider the ludicrous behavior of "locking". Whenever a logical set of resources must be updated together, we "lock" the set, and we do so with a loosely associated but dedicated lock object that otherwise serves no purpose (red flag #1).
We then use the "lock" pattern to isolate the region of code where a coherent state change happens across a SET of data fields, and yet we shoot ourselves in the foot by mixing unrelated fields into the same object, leaving them all mutable, and then painting ourselves into a corner (red flag #2) where we must also take the lock when reading those various fields, so that we never catch them in an inconsistent state.
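To make the critique concrete, the pattern described above looks roughly like this (a hypothetical sketch in Java, since the idea is language-neutral; all class and field names here are invented for illustration):

```java
// The "lock" pattern under critique: a dedicated lock object guards a set
// of related mutable fields, and every reader must also take the lock to
// avoid observing the fields mid-update.
class NameRegistry {
    private final Object lockObj = new Object(); // exists only to be locked (red flag #1)
    private final java.util.Map<String, Integer> nameToId = new java.util.HashMap<>();
    private final java.util.Map<Integer, String> idToName = new java.util.HashMap<>();

    void add(String name, int id) {
        synchronized (lockObj) {        // writers lock...
            nameToId.put(name, id);
            idToName.put(id, name);
        }
    }

    Integer idOf(String name) {
        synchronized (lockObj) {        // ...and so must readers (red flag #2)
            return nameToId.get(name);
        }
    }
}
```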
There is clearly a serious problem with this design. It is somewhat unstable, since it requires careful management of the lock objects (lock ordering, nested locks, cross-thread coordination, blocking/waiting on a resource held by another thread that is itself waiting for you to do something, etc.), all of which depends on context... We also hear people describe deadlock avoidance as "hard", when it is actually very simple: don't steal the shoes of the person you plan to ask to run a race for you!
Solution:
Stop using "lock" altogether. Roll your fields up properly into an immutable object representing a consistent state or schema. Perhaps it's simply a pair of dictionaries for converting to and from display names and internal IDs, or perhaps it's the head node of a queue containing a value and a reference to the next object; whatever it is, wrap it in its own object and seal it for consistency.
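A minimal sketch of such a sealed, immutable state object, assuming the two-dictionary example from the text (Java stands in for C# here; the names are invented):

```java
// Once constructed, a NameState never changes, so any thread may read it
// without a lock; "updates" produce a brand-new state object instead.
final class NameState {
    final java.util.Map<String, Integer> nameToId;
    final java.util.Map<Integer, String> idToName;

    NameState(java.util.Map<String, Integer> n2i, java.util.Map<Integer, String> i2n) {
        // Defensive copies, wrapped unmodifiable, keep the state truly sealed.
        this.nameToId = java.util.Collections.unmodifiableMap(new java.util.HashMap<>(n2i));
        this.idToName = java.util.Collections.unmodifiableMap(new java.util.HashMap<>(i2n));
    }

    // Returns a new consistent state; the old one is untouched.
    NameState with(String name, int id) {
        java.util.Map<String, Integer> n2i = new java.util.HashMap<>(nameToId);
        java.util.Map<Integer, String> i2n = new java.util.HashMap<>(idToName);
        n2i.put(name, id);
        i2n.put(id, name);
        return new NameState(n2i, i2n);
    }
}
```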
Recognize write/update contention for what it is, detect it when it occurs, and make a context-sensitive decision: retry immediately, do something else first, or come back later, rather than blocking indefinitely.
While locking looks like an easy way to push through a task that apparently must be done, not all threads are so dedicated and self-serving that they can afford it at the risk of the entire system. Not only is serializing things with "lock" lazy, but as a side effect of pretending the write must not be interrupted, you block/freeze your thread, rendering it unresponsive and useless, abandoning all of its other duties in its stubborn wait to accomplish what it set out to do some time ago, and ignoring the fact that helping others is sometimes necessary to fulfill one's own duties.
Race conditions are normal when independent, spontaneous actions proceed simultaneously, but unlike uncontrolled Ethernet collisions, we programmers have complete control over our "system" (i.e. deterministic digital hardware) and its inputs (however random they appear, can anything truly be random?) and outputs, as well as the memory that stores our system's state, so livelock should be a non-problem; moreover, we have atomic operations with memory barriers to cope with the fact that many processors may be operating concurrently.
Summarizing:
- Grab the current state object, consume its data, and build a new state.
- Understand that other active threads will do the same and may beat you to it, but everyone observes one authoritative reference point representing the "current" state.
- Use Interlocked.CompareExchange to atomically verify that the state object you based your work on is still the most current state, and replace it with your new one; otherwise fail (because another thread finished first) and take the appropriate corrective action.
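The three steps above can be sketched as a retry loop. Java's AtomicReference.compareAndSet plays the role of Interlocked.CompareExchange<T> in this sketch, and a boxed integer stands in for any immutable state object:

```java
import java.util.concurrent.atomic.AtomicReference;

class CasCounter {
    // The single authoritative reference to the current (immutable) state.
    private final AtomicReference<Integer> state = new AtomicReference<>(0);

    int increment() {
        while (true) {
            Integer current = state.get();  // 1. grab the current state
            Integer next = current + 1;     // 2. build a new state from it
            // 3. atomically check that `current` is still authoritative and
            //    swap in `next`; on failure another thread won, so retry.
            if (state.compareAndSet(current, next)) {
                return next;
            }
        }
    }

    int get() { return state.get(); }
}
```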
The most important part is how you handle the failure and get back on your horse. This is where we avoid livelock: overthinking, under-doing, or doing the wrong thing. I would say that locks give the illusion that you will never fall off your horse despite riding into a stampede, and while a thread dreams away in that fantasy land, the rest of the system can fall apart, crash and burn.
So, is there anything the "lock" construct can do that cannot be achieved (better, and less precariously) with a lock-free implementation using CompareExchange and immutable logical-state objects?
I arrived at all of this awareness on my own after working intensively with locks, but after some searching, in another thread, "Is lock-free multithreading making anything easier?", someone mentions that lock-free programming will be very important when we face highly concurrent systems with hundreds of processors that cannot afford the use of highly contended locks.
There are four conditions needed for a race condition to occur.
- The first condition is that there are memory locations accessible from more than one thread. Typically, these locations are global/static variables, or heap memory reachable from global/static variables.
- The second condition is that there is a property (often called an invariant) associated with these shared memory locations that must be true, or valid, for the program to function correctly. Typically, the property must hold true before an update begins in order for the update to be correct.
- The third condition is that the invariant property does not hold during some part of the actual update. (It is temporarily invalid or false during some portion of the processing.)
- The fourth and final condition that must occur for a race is that another thread accesses the memory while the invariant does not hold, causing inconsistent or incorrect behavior.
- If you have no memory location shared among multiple threads, or you can write your code to eliminate that shared variable or restrict access to it to a single thread, then there is no possibility of a race condition and nothing to worry about. Otherwise, the lock statement or some other synchronization routine is absolutely necessary and cannot be safely ignored.
- If there is no invariant (say, all you do is write to the shared memory location, and nothing in the threads' operation reads its value), then again there is no problem.
- If the invariant is never invalid, again, no problem (say the shared memory is a datetime field storing the date and time the code last ran; then it can never really be in an invalid state, regardless of which thread writes it...).
- To eliminate condition number 4, you must restrict write access to the block of code that touches the shared memory, so that only one thread at a time can execute it, using lock or some comparable synchronization methodology.
- The "concurrency hit" in this case is not only unavoidable but absolutely necessary. Intelligent analysis of exactly what memory is shared, and exactly what your critical "invariant" is, lets you code the system so as to minimize this concurrency "hit" (i.e., to be maximally concurrent while staying safe).
I would like to know how you would accomplish this task using a lock-free programming style: you have multiple worker threads, all of which periodically hit a shared task list for the next job. They (currently) lock the list, find the item at the head, remove it, and unlock the list. Consider all the error conditions and possible data races, so that no two threads can end up working on the same task and no task is accidentally skipped.
I suspect such code might suffer from over-complexity and a likelihood of poor performance under high contention.
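For comparison, one well-studied lock-free shape for this problem is the Treiber stack: the head of the list lives in a single atomic reference, and a pop simply retries when its CAS loses. Exactly one CAS can succeed per head node, so no task is taken twice and none is skipped. This Java sketch simplifies the question's queue to LIFO order:

```java
import java.util.concurrent.atomic.AtomicReference;

class TaskStack<T> {
    private static final class Node<T> {
        final T task;
        final Node<T> next;
        Node(T task, Node<T> next) { this.task = task; this.next = next; }
    }

    private final AtomicReference<Node<T>> head = new AtomicReference<>();

    void push(T task) {
        Node<T> h;
        Node<T> n;
        do {
            h = head.get();
            n = new Node<>(task, h);
        } while (!head.compareAndSet(h, n));
    }

    // Returns null when empty. A losing CAS means another worker removed
    // the head first; we re-read and try again with the new head.
    T pop() {
        while (true) {
            Node<T> h = head.get();
            if (h == null) return null;
            if (head.compareAndSet(h, h.next)) return h.task;
        }
    }
}
```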
The big advantage of a lock over a CAS operation such as Interlocked.CompareExchange is that you can modify multiple memory locations within the lock, and all of the changes become visible to other threads/processes at the same time.
With CAS, only a single variable is atomically updated. Lock-free code is usually significantly more complex because, not only can you update only a single variable (or two adjacent variables with CAS2) atomically with respect to other threads, you must also handle the "failure" case when the CAS does not succeed. In addition, you need to handle ABA problems and other possible complications.
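The single-variable limitation can sometimes be worked around in exactly the way the question proposes: bundle the related locations into one immutable object and CAS the one reference to it, so other threads see either the old pair or the new pair, never a mix. (A Java sketch with invented names; allocating a fresh object per update also avoids the classic ABA case in a garbage-collected runtime, since a reference still held by a reader cannot be recycled.)

```java
import java.util.concurrent.atomic.AtomicReference;

class Range {
    // Two logically related values that must always change together.
    private static final class Bounds {
        final int min, max;
        Bounds(int min, int max) { this.min = min; this.max = max; }
    }

    private final AtomicReference<Bounds> bounds =
            new AtomicReference<>(new Bounds(0, 0));

    // Both fields are replaced in one atomic step via a single CAS.
    void widen(int by) {
        Bounds cur;
        Bounds next;
        do {
            cur = bounds.get();
            next = new Bounds(cur.min - by, cur.max + by);
        } while (!bounds.compareAndSet(cur, next));
    }

    int min() { return bounds.get().min; }
    int max() { return bounds.get().max; }
}
```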
There are many approaches, such as low-lock techniques, fine-grained locks, striped locks, reader-writer locks, etc., which can make simple locking code much more concurrency-friendly.
That said, there are many interesting uses for both locking and lock-free code. However, unless you REALLY know what you are doing, writing your own lock-free code is not for beginners. Use either locking code or algorithms that have been well tested and proven thoroughly, because it is very difficult to find the edge conditions that cause many lock-free attempts to fail.
I would argue that this is no more obsolete than, generally speaking, pessimistic concurrency is obsolete given optimistic concurrency, or that pattern A is obsolete because of pattern B. I think it's about context. Lock-free is powerful, but there is no point in applying it unilaterally, because not every problem is a perfect fit for it. There are trade-offs. That said, it would be nice to have a lock-free, optimistic, general-purpose approach where one has not traditionally been implemented. In short: yes, lock can do something the other approach cannot: present a potentially simpler solution. Then again, both may have the same outcome if certain things don't matter. I suppose I differentiate a little...
In theory, if there is a fixed amount of work to be done, a program using Interlocked.CompareExchange can manage without locks. Unfortunately, in the presence of high contention, a read/compute-new/compareExchange loop can end up thrashing so badly that 100 CPUs, each trying to perform one update to a common item, could end up taking longer, in real time, than a single processor performing 100 updates in sequence. Parallelism wouldn't improve performance; it would kill it. Using a lock to guard the resource would mean that only one CPU at a time could update it, but it would improve performance to match the single-processor case.
The only real advantage of lock-free programming is that system functionality is not adversely affected if a thread gets waylaid for an arbitrary amount of time. That advantage can be maintained, while avoiding the performance problems of CompareExchange-based spinning, by using a combination of locks and timeouts. The basic idea is that, in the presence of contention, the resource switches to lock-based synchronization, but if a thread holds a lock for too long, a new lock object is created and the earlier lock is ignored. That would mean that if the old thread were still trying to perform its CompareExchange loop, it would fail (and have to start all over again), but later threads would not be blocked, nor would correctness be sacrificed.
Note that the code required to arbitrate all of the above will be complex, but if you want the system to be reliable under certain failure conditions, such code may well be required.