
perf/core: Explain perf_sched_mutex

To clarify why atomic_inc_return(&perf_sched_events) is not sufficient and
a mutex is needed to order static branch enabling vs the atomic counter
increment, this adds a comment with a short explanation.

Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20170829140103.6563-1-alexander.shishkin@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
commit 5bce9db189
Author: Alexander Shishkin <alexander.shishkin@linux.intel.com>
 kernel/events/core.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/kernel/events/core.c b/kernel/events/core.c
--- a/kernel/events/core.c
+++ b/kernel/events/core.c

@@ -9394,6 +9394,11 @@ static void account_event(struct perf_event *event)
 		inc = true;
 
 	if (inc) {
+		/*
+		 * We need the mutex here because static_branch_enable()
+		 * must complete *before* the perf_sched_count increment
+		 * becomes visible.
+		 */
 		if (atomic_inc_not_zero(&perf_sched_count))
 			goto enabled;
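
For context, the ordering the new comment documents can be illustrated with a
minimal user-space sketch. This is a hedged analogue, not the kernel's code:
the names sched_count and sched_enabled are illustrative, atomic_inc_not_zero()
is approximated with a CAS loop, and the static branch is modelled as an atomic
flag. The point it demonstrates is why a bare atomic_inc_return() would not be
enough: a second caller could observe a non-zero count, and take the fast path,
before the first caller's enable step had completed. Holding the mutex across
"enable, then increment" closes that window:

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_int sched_count;           /* analogue of perf_sched_count */
static atomic_bool sched_enabled;        /* stand-in for the static key  */
static pthread_mutex_t sched_mutex = PTHREAD_MUTEX_INITIALIZER;

static void account(void)
{
	int old = atomic_load(&sched_count);

	/*
	 * Fast path, the atomic_inc_not_zero() analogue: a non-zero count
	 * is only ever published after the enable step has finished, so
	 * incrementing it here needs no lock.
	 */
	while (old != 0) {
		if (atomic_compare_exchange_weak(&sched_count, &old, old + 1))
			return;
	}

	/*
	 * Slow path: the mutex makes "enable, then make the count visible
	 * as non-zero" a single indivisible step, so no fast-path caller
	 * can slip in before the enable has completed.
	 */
	pthread_mutex_lock(&sched_mutex);
	if (atomic_load(&sched_count) == 0)
		atomic_store(&sched_enabled, true); /* static_branch_enable() */
	atomic_fetch_add(&sched_count, 1);
	pthread_mutex_unlock(&sched_mutex);
}

int main(void)
{
	account();
	account();
	printf("enabled=%d count=%d\n",
	       (int)atomic_load(&sched_enabled), atomic_load(&sched_count));
	return 0;
}

The asymmetry is deliberate: only the 0 -> 1 transition needs the enable step,
so only that transition pays for the mutex; every later caller takes the
lock-free fast path.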