From: Oleg Nesterov
Date: Wed, 17 Oct 2007 06:30:56 +0000 (-0700)
Subject: migration_call(CPU_DEAD): use spin_lock_irq() instead of task_rq_lock()
X-Git-Url: http://drtracing.org/?a=commitdiff_plain;ds=sidebyside;h=d2da272a4e581e831e3567a37ef167686f1ea1d3;p=deliverable%2Flinux.git

migration_call(CPU_DEAD): use spin_lock_irq() instead of task_rq_lock()

Change migration_call(CPU_DEAD) to use a direct spin_lock_irq() instead of
task_rq_lock(rq->idle): rq->idle can't change its task_rq(), so the extra
work done by task_rq_lock() is unnecessary.  This also makes the code a
bit more symmetrical with the migrate_dead_tasks() path, which uses
spin_lock_irq()/spin_unlock_irq().

Signed-off-by: Oleg Nesterov
Cc: Cliff Wickman
Cc: Gautham R Shenoy
Cc: Ingo Molnar
Cc: Srivatsa Vaddagiri
Cc: Akinobu Mita
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
---

diff --git a/kernel/sched.c b/kernel/sched.c
index c747bc9f3c24..c4889abc00b6 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -5457,14 +5457,14 @@ migration_call(struct notifier_block *nfb, unsigned long action, void *hcpu)
 		kthread_stop(rq->migration_thread);
 		rq->migration_thread = NULL;
 		/* Idle task back to normal (off runqueue, low prio) */
-		rq = task_rq_lock(rq->idle, &flags);
+		spin_lock_irq(&rq->lock);
 		update_rq_clock(rq);
 		deactivate_task(rq, rq->idle, 0);
 		rq->idle->static_prio = MAX_PRIO;
 		__setscheduler(rq, rq->idle, SCHED_NORMAL, 0);
 		rq->idle->sched_class = &idle_sched_class;
 		migrate_dead_tasks(cpu);
-		task_rq_unlock(rq, &flags);
+		spin_unlock_irq(&rq->lock);
 		migrate_nr_uninterruptible(rq);
 		BUG_ON(rq->nr_running != 0);