sched: fix niced_granularity() shift
Fix the shift in niced_granularity(): the scaled value was shifted
down by WMULT_SHIFT instead of (WMULT_SHIFT - NICE_0_SHIFT), dropping
the NICE_0_LOAD factor. This resulted in under-scheduling of CPU-bound
negative nice level tasks (and this in turn caused higher than
necessary latencies in nice-0 tasks).
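
As an illustration, below is a minimal user-space sketch of the
fixed-point arithmetic involved. The constants WMULT_SHIFT == 32 and
NICE_0_SHIFT == 10, and the nice -5 load weight of 3121, are
assumptions taken from the scheduler code of this era and are not part
of this patch:

#include <stdio.h>
#include <stdint.h>

#define WMULT_SHIFT	32	/* assumed: fixed-point shift of inv_weight */
#define NICE_0_SHIFT	10	/* assumed: NICE_0_LOAD == 1 << 10 */

int main(void)
{
	unsigned long granularity = 2000000;	/* e.g. 2 ms in ns */
	unsigned long weight = 3121;		/* assumed nice -5 load weight */
	/* inv_weight approximates 2^WMULT_SHIFT / weight: */
	uint64_t inv_weight = ((uint64_t)1 << WMULT_SHIFT) / weight;
	uint64_t tmp = inv_weight * (uint64_t)granularity;

	/* old shift: drops the NICE_0_LOAD scale, ~1024x too small */
	printf("old: %ld\n", (long)(tmp >> WMULT_SHIFT));
	/* new shift: keeps granularity * NICE_0_LOAD / weight */
	printf("new: %ld\n", (long)(tmp >> (WMULT_SHIFT - NICE_0_SHIFT)));
	return 0;
}

With these example numbers the old shift yields ~640 instead of
~656000, i.e. a granularity smaller by the NICE_0_LOAD factor of 1024,
which matches the under-scheduling described above.
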
Signed-off-by: Ingo Molnar <mingo@elte.hu>
diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index ce39282..810b52d 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -291,7 +291,7 @@
 	/*
 	 * It will always fit into 'long':
 	 */
-	return (long) (tmp >> WMULT_SHIFT);
+	return (long) (tmp >> (WMULT_SHIFT-NICE_0_SHIFT));
 }
 
 static inline void