sched/fair: Have task_move_group_fair() unconditionally add the entity load to the runqueue
author		Byungchul Park <byungchul.park@lge.com>
		Thu, 20 Aug 2015 11:21:57 +0000 (20:21 +0900)
committer	Ingo Molnar <mingo@kernel.org>
		Sun, 13 Sep 2015 07:52:46 +0000 (09:52 +0200)
Currently we conditionally add the entity load to the rq when moving
the task between cgroups.

This doesn't make sense as we always 'migrate' the task between
cgroups, so we should always migrate the load too.

[ The history here is that we used to only migrate the blocked load,
  which was only meaningful when !queued. ]

Signed-off-by: Byungchul Park <byungchul.park@lge.com>
[ Rewrote the changelog. ]
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: yuyang.du@intel.com
Link: http://lkml.kernel.org/r/1440069720-27038-3-git-send-email-byungchul.park@lge.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
kernel/sched/fair.c

index a72a71b501de8d1243019fad969da577b9f09c13..959b2ea386b3a15313db5098a47de3ab054e17b4 100644 (file)
@@ -8041,13 +8041,12 @@ static void task_move_group_fair(struct task_struct *p, int queued)
                se->vruntime -= cfs_rq_of(se)->min_vruntime;
        set_task_rq(p, task_cpu(p));
        se->depth = se->parent ? se->parent->depth + 1 : 0;
-       if (!queued) {
-               cfs_rq = cfs_rq_of(se);
+       cfs_rq = cfs_rq_of(se);
+       if (!queued)
                se->vruntime += cfs_rq->min_vruntime;
 
-               /* Virtually synchronize task with its new cfs_rq */
-               attach_entity_load_avg(cfs_rq, se);
-       }
+       /* Virtually synchronize task with its new cfs_rq */
+       attach_entity_load_avg(cfs_rq, se);
 }
 
 void free_fair_sched_group(struct task_group *tg)
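
For reference, the tail of task_move_group_fair() with this patch applied, reconstructed from the hunk above. The earlier body of the function is elided, and the local declarations of se and cfs_rq plus the initial !queued check guarding the vruntime normalization are a sketch inferred from how the context lines use them, not verbatim source:

	static void task_move_group_fair(struct task_struct *p, int queued)
	{
		struct sched_entity *se = &p->se;
		struct cfs_rq *cfs_rq;

		/* ... earlier body of the function elided ... */

		if (!queued)
			se->vruntime -= cfs_rq_of(se)->min_vruntime;
		set_task_rq(p, task_cpu(p));
		se->depth = se->parent ? se->parent->depth + 1 : 0;
		cfs_rq = cfs_rq_of(se);
		if (!queued)
			se->vruntime += cfs_rq->min_vruntime;

		/* Virtually synchronize task with its new cfs_rq */
		attach_entity_load_avg(cfs_rq, se);
	}

Note that attach_entity_load_avg() now runs for both the queued and !queued cases; only the vruntime renormalization remains conditional on !queued.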