
locking/qspinlock: Avoid redundant read of next pointer

With optimistic prefetch of the next node's cacheline, the next pointer
may have already been properly initialized. As a result, the read of
node->next in the contended path may be redundant. This patch
eliminates the redundant read when the next pointer value is not NULL.

Signed-off-by: Waiman Long <Waiman.Long@hpe.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Douglas Hatch <doug.hatch@hpe.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Scott J Norton <scott.norton@hpe.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1447114167-47185-4-git-send-email-Waiman.Long@hpe.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Waiman Long, 9 years ago
parent commit aa68744f80

+ 6 - 3
kernel/locking/qspinlock.c

@@ -396,6 +396,7 @@ queue:
 	 * p,*,* -> n,*,*
 	 */
 	old = xchg_tail(lock, tail);
+	next = NULL;
 
 	/*
 	 * if there was a previous node; link it and wait until reaching the
@@ -463,10 +464,12 @@ queue:
 	}
 
 	/*
-	 * contended path; wait for next, release.
+	 * contended path; wait for next if not observed yet, release.
 	 */
-	while (!(next = READ_ONCE(node->next)))
-		cpu_relax();
+	if (!next) {
+		while (!(next = READ_ONCE(node->next)))
+			cpu_relax();
+	}
 
 	arch_mcs_spin_unlock_contended(&next->locked);
 	pv_kick_node(lock, next);