Kernel spinlock: preemption is enabled before unlocking

When I discussed the behavior of spinlocks in uni- and SMP kernels with some colleagues, we dove into the code and found a line that really surprised us, and we cannot figure out why it is done this way.

A short call trace to show how we got there:

spin_lock calls raw_spin_lock,

raw_spin_lock calls _raw_spin_lock, and

on a uniprocessor system, _raw_spin_lock is #defined as __LOCK.

__LOCK is defined as:

#define __LOCK(lock) \
  do { preempt_disable(); ___LOCK(lock); } while (0)


So far, so good. We disable preemption by incrementing the task's preemption counter. I guess this is done for performance reasons: since you should hold a spinlock only for a very short time, it is better to simply finish your critical section than to be preempted in the middle of it, with another task possibly getting scheduled and having to wait for you to finish.
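For reference, the preemption machinery these macros rely on boils down to a per-task counter. Paraphrased from include/linux/preempt.h of a preemptible (CONFIG_PREEMPT) kernel (the exact names and details vary between kernel versions, so treat this as a sketch rather than a verbatim quote), it looks roughly like this:

#define preempt_disable() \
do { \
  preempt_count_inc(); \
  barrier(); \
} while (0)

#define preempt_enable() \
do { \
  barrier(); \
  if (unlikely(preempt_count_dec_and_test())) \
    __preempt_schedule(); \
} while (0)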

However, now we finally come to my question. The corresponding unlock code is as follows:

#define __UNLOCK(lock) \
  do { preempt_enable(); ___UNLOCK(lock); } while (0)


Why is preempt_enable() called before ___UNLOCK? This seems very unintuitive to us, because you could be preempted immediately after calling preempt_enable, without ever having released your spinlock. It looks as if the whole preempt_disable/preempt_enable logic is rendered somewhat ineffective, especially since preempt_enable explicitly checks whether the preemption counter has reached 0 again and, if so, calls the scheduler. It seems to us that it would be much more natural to release the lock first and only then decrement the counter, thereby potentially re-enabling scheduling.

What are we missing? What is the idea behind calling preempt_enable before ___UNLOCK rather than the other way around?

+3




1 answer


You are looking at a uniprocessor build. The comment in spinlock_api_up.h says ( http://lxr.free-electrons.com/source/include/linux/spinlock_api_up.h#L21 ):

/*
 * In the UP-nondebug case there's no real locking going on, so the
 * only thing we have to do is to keep the preempt counts and irq
 * flags straight, to suppress compiler warnings of unused lock
 * variables, and to add the proper checker annotations:
 */




The ___LOCK and ___UNLOCK macros exist purely for annotation purposes; when __CHECKER__ is not defined (it is defined when the code is run through sparse), they compile away to nothing.
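For completeness, in that same header the annotation macros look roughly like this (quoted from memory, so check the linked source for the exact form); the (void)(lock) cast is what suppresses the "unused lock variable" warning the comment mentions, and __acquire/__release expand to nothing unless sparse is running:

#define ___LOCK(lock) \
  do { __acquire(lock); (void)(lock); } while (0)

#define ___UNLOCK(lock) \
  do { __release(lock); (void)(lock); } while (0)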

In other words, preempt_disable() and preempt_enable() are what actually do the locking in the single-processor case.
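To make that concrete, here is a minimal userspace sketch. It is a toy model, not kernel code; the names merely mirror the kernel macros. Once ___LOCK and ___UNLOCK compile to nothing, the ordering inside __UNLOCK cannot matter, because the only state the UP "spinlock" ever touches is the preempt counter:

#include <stdio.h>

static int preempt_count;                /* models the per-task preemption counter */

#define preempt_disable() do { preempt_count++; } while (0)
#define preempt_enable()  do { if (--preempt_count == 0) { /* kernel would check for reschedule here */ } } while (0)

#define ___LOCK(lock)   do { (void)(lock); } while (0)    /* annotation only, compiles to nothing */
#define ___UNLOCK(lock) do { (void)(lock); } while (0)    /* annotation only, compiles to nothing */

#define __LOCK(lock)    do { preempt_disable(); ___LOCK(lock); } while (0)
#define __UNLOCK(lock)  do { preempt_enable(); ___UNLOCK(lock); } while (0)

int main(void)
{
  int dummy_lock = 0;

  __LOCK(&dummy_lock);
  printf("in critical section, preempt_count = %d\n", preempt_count);
  __UNLOCK(&dummy_lock);
  printf("after unlock, preempt_count = %d\n", preempt_count);
  return 0;
}

Swapping preempt_enable() and ___UNLOCK() inside __UNLOCK changes nothing here: the "unlock" generates no code, so there is no window in which the task could be preempted while still holding a real lock.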

+3








