
sched/fair: Increase PELT accuracy for small tasks

We truncate (and lose) the lower 10 bits of runtime in
___update_load_avg(); this means there is a consistent bias to
under-account tasks. This is especially significant for small tasks.

Cure this by only forwarding last_update_time to the point we've
actually accounted for, leaving the remainder for the next time.
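
As a rough illustration (a minimal, stand-alone user-space sketch, not
the kernel code; the variable names and the 1536 ns figure are made up
for the example): a task that runs 1536 ns per period loses the low 10
bits on every update with the old code, so only 1024 ns is accounted
before last_update_time jumps to 'now'; carrying the remainder forward
recovers it on the next update.

	/*
	 * Hypothetical model of the accounting step, not
	 * ___update_load_avg() itself.
	 */
	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		uint64_t now = 0, last_old = 0, last_new = 0;
		uint64_t acct_old = 0, acct_new = 0;

		for (int i = 0; i < 1000; i++) {
			now += 1536;		/* small task: ~1.5us of runtime */

			/* old: truncate to 1us units, then jump to 'now' */
			uint64_t delta = (now - last_old) >> 10;
			acct_old += delta << 10;
			last_old = now;		/* low 10 bits are lost */

			/* new: only forward by what was actually accounted */
			delta = (now - last_new) >> 10;
			acct_new += delta << 10;
			last_new += delta << 10; /* remainder carries over */
		}

		/* ran 1536000 ns; old accounts ~1024000 ns, new ~1536000 ns */
		printf("ran %llu, old %llu, new %llu\n",
		       (unsigned long long)now,
		       (unsigned long long)acct_old,
		       (unsigned long long)acct_new);
		return 0;
	}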

Reported-by: Morten Rasmussen <morten.rasmussen@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Morten Rasmussen <morten.rasmussen@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Peter Zijlstra 8 years ago
commit bb0bd044e6
1 changed file with 2 additions and 1 deletion:
    kernel/sched/fair.c

kernel/sched/fair.c: +2 -1

@@ -2915,7 +2915,8 @@ ___update_load_avg(u64 now, int cpu, struct sched_avg *sa,
 	delta >>= 10;
 	if (!delta)
 		return 0;
-	sa->last_update_time = now;
+
+	sa->last_update_time += delta << 10;
 
 	/*
 	 * Now we know we crossed measurement unit boundaries. The *_avg