mm: sched: numa: fix NUMA balancing when !SCHED_DEBUG
Commit 3105b86 ("mm: sched: numa: Control enabling and disabling of
NUMA balancing if !SCHED_DEBUG") defined numabalancing_enabled to
control the enabling and disabling of automatic NUMA balancing, but it
is never used.

I believe the intention was to use this in place of sched_feat_numa(NUMA).

Currently, if SCHED_DEBUG is not defined, sched_feat_numa(NUMA) will
never be changed from the initial "false".

Signed-off-by: Dave Kleikamp <dave.kleikamp@oracle.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
kleikamp authored and torvalds committed Jul 31, 2013
1 parent f93f3c4 commit 10e84b9
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions kernel/sched/fair.c
@@ -851,7 +851,7 @@ void task_numa_fault(int node, int pages, bool migrated)
 {
 	struct task_struct *p = current;
 
-	if (!sched_feat_numa(NUMA))
+	if (!numabalancing_enabled)
 		return;
 
 	/* FIXME: Allocate task-specific structure for placement policy here */
@@ -5786,7 +5786,7 @@ static void task_tick_fair(struct rq *rq, struct task_struct *curr, int queued)
 		entity_tick(cfs_rq, se, queued);
 	}
 
-	if (sched_feat_numa(NUMA))
+	if (numabalancing_enabled)
 		task_tick_numa(rq, curr);
 
 	update_rq_runnable_avg(rq, 1);
