Why is Java's StampedLock faster than ReentrantReadWriteLock?

This comparison of StampedLock and other locks shows that StampedLock is the fastest as contention grows. However, neither that comparison nor other articles explain why this is the case. It seems to use the same CAS semantics as the other kinds of locks. Can anyone explain why it gets faster as contention grows?
For example, in the code below writeLock() blocks not only other writeLock() calls but also readLock() calls. I am not even dealing with optimistic reads yet; I just used writeLock(). What is the advantage here, and how is it faster than ReentrantLock (especially since StampedLock is not even reentrant)?

    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.locks.StampedLock;

    import static java.lang.System.out;

    public class StampedLockDemo {

        public static void main(String[] args) throws Exception {
            StampedLock lk = new StampedLock();

            new Thread(() -> {
                long stamp = lk.writeLock();
                try {
                    out.println("locked.. sleeping");
                    TimeUnit.SECONDS.sleep(5);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                } finally {
                    lk.unlock(stamp); // release the write lock even if the sleep is interrupted
                }
            }).start();

            new Thread(() -> {
                long stamp = lk.writeLock();
                try {
                    out.println("locked.. sleeping");
                    TimeUnit.SECONDS.sleep(5);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                } finally {
                    lk.unlock(stamp); // release the write lock even if the sleep is interrupted
                }
            }).start();
        }
    }

      





1 answer


To be clear, it is StampedLock's reads that get much faster as contention rises. Writes are a bit faster too, but not nearly as much as reads. I'll explain why.

In most read-write lock scenarios there are far fewer writes than reads. Even so, every time you acquire readLock() on a ReentrantReadWriteLock, the reader count must be incremented. That write invalidates the cache line holding the lock state on every other core using the lock.

Under heavy contention this can slow reads down significantly. Reads are supposed to be fast; having to update a shared variable on every readLock() is counter-intuitive to that.
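
For comparison, here is a rough sketch of what a typical read path looks like with ReentrantReadWriteLock; the RwlPoint class and its fields are made up purely for illustration. Both lock() and unlock() on the read lock must CAS the shared lock state that encodes the reader count, so every read writes to a contended cache line:

    import java.util.concurrent.locks.ReentrantReadWriteLock;

    // Hypothetical holder class, for illustration only.
    class RwlPoint {
        private final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();
        private double x, y;

        double distanceFromOrigin() {
            rwl.readLock().lock();       // CAS on shared state: reader count +1
            try {
                return Math.hypot(x, y);
            } finally {
                rwl.readLock().unlock(); // CAS on shared state: reader count -1
            }
        }
        // (the write path with rwl.writeLock() is omitted for brevity)
    }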

What if, instead, we had a stamp, or say a version number? One that is updated only once per write.



What this means is that if only one thread updates the stamp value (say, after a write), all reading threads get cache hits when they inspect the lock state. This eliminates the cache invalidation and lets the lock perform far better than ReentrantReadWriteLock under read contention.
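
A minimal sketch of the write side, assuming a hypothetical StampedPoint class: only the writer modifies the lock's stamp/version, so between writes the readers' cached view of it stays valid:

    import java.util.concurrent.locks.StampedLock;

    // Hypothetical holder class, for illustration only.
    class StampedPoint {
        private final StampedLock sl = new StampedLock();
        private double x, y;

        void move(double dx, double dy) {
            long stamp = sl.writeLock();  // exclusive lock; advances the internal version
            try {
                x += dx;
                y += dy;
            } finally {
                sl.unlockWrite(stamp);    // releases the lock and publishes a new stamp
            }
        }
    }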

The usage pattern for StampedLock's tryOptimisticRead is similar to a lock-free retry loop (like CAS); a minimal sketch follows the list:

  • Get a stamp.
  • Read the value(s).
  • Has the stamp changed?
    • Yes: retry, or fall back to the read lock.
    • No: we're good, move on.
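
Here is that pattern, assuming a hypothetical OptimisticPoint class modeled on the example in the StampedLock javadoc:

    import java.util.concurrent.locks.StampedLock;

    // Hypothetical holder class, for illustration only.
    class OptimisticPoint {
        private final StampedLock sl = new StampedLock();
        private double x, y;

        double distanceFromOrigin() {
            long stamp = sl.tryOptimisticRead();  // 1. get a stamp (no write to shared state)
            double curX = x, curY = y;            // 2. read the values
            if (!sl.validate(stamp)) {            // 3. has the stamp changed since step 1?
                stamp = sl.readLock();            //    yes: fall back to a full read lock
                try {
                    curX = x;
                    curY = y;
                } finally {
                    sl.unlockRead(stamp);
                }
            }
            return Math.hypot(curX, curY);        //    no: we never touched shared lock state
        }
    }

On the happy path the reader never writes to any shared lock state; it only falls back to readLock() when validate() reports that a writer got in between.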






