sched: Fix unregister_fair_sched_group()
author		Paul Turner <pjt@google.com>
		Tue, 30 Nov 2010 00:55:40 +0000 (16:55 -0800)
committer	Ingo Molnar <mingo@elte.hu>
		Tue, 30 Nov 2010 09:07:10 +0000 (10:07 +0100)
In the flipping and flopping between calling
unregister_fair_sched_group() on a per-cpu versus per-group basis
we ended up in a bad state.

Remove from the list for the passed cpu as opposed to some
arbitrary index.

( This fixes explosions w/ autogroup as well as a group
  creation/destruction stress test. )

Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Turner <pjt@google.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
LKML-Reference: <20101130005740.080828123@google.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
kernel/sched.c

index 35a6373f12653fa39f99c1e2f72af438bd997809..66ef5790d932779fc5f119b129f3831f7cdbb3b1 100644 (file)
@@ -8085,7 +8085,6 @@ static inline void unregister_fair_sched_group(struct task_group *tg, int cpu)
 {
        struct rq *rq = cpu_rq(cpu);
        unsigned long flags;
-       int i;
 
        /*
         * Only empty task groups can be destroyed; so we can speculatively
@@ -8095,7 +8094,7 @@ static inline void unregister_fair_sched_group(struct task_group *tg, int cpu)
                return;
 
        raw_spin_lock_irqsave(&rq->lock, flags);
-       list_del_leaf_cfs_rq(tg->cfs_rq[i]);
+       list_del_leaf_cfs_rq(tg->cfs_rq[cpu]);
        raw_spin_unlock_irqrestore(&rq->lock, flags);
 }
 #else /* !CONFIG_FAIR_GROUP_SCHED */