mm: reduce atomic use on use_mm fast path
When the mm being switched to matches the active mm, we don't need to
increment and then drop the mm count.  In a simple benchmark this happens
about 50% of the time.  Making that conditional reduces contention on that
cacheline on SMP systems.

Acked-by: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mstsirkin authored and torvalds committed Sep 22, 2009
1 parent 3d2d827 commit f68e148
Showing 1 changed file with 6 additions and 3 deletions.
9 changes: 6 additions & 3 deletions mm/mmu_context.c
@@ -26,13 +26,16 @@ void use_mm(struct mm_struct *mm)
 
 	task_lock(tsk);
 	active_mm = tsk->active_mm;
-	atomic_inc(&mm->mm_count);
+	if (active_mm != mm) {
+		atomic_inc(&mm->mm_count);
+		tsk->active_mm = mm;
+	}
 	tsk->mm = mm;
-	tsk->active_mm = mm;
 	switch_mm(active_mm, mm, tsk);
 	task_unlock(tsk);
 
-	mmdrop(active_mm);
+	if (active_mm != mm)
+		mmdrop(active_mm);
 }
 
 /*
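For context, here is how use_mm() reads with this change applied. This is a sketch reconstructed from the hunk above; the local variable declarations at the top of the function are assumed from the unchanged part of mm/mmu_context.c, which the diff does not show.

void use_mm(struct mm_struct *mm)
{
	struct mm_struct *active_mm;
	struct task_struct *tsk = current;

	task_lock(tsk);
	active_mm = tsk->active_mm;
	if (active_mm != mm) {
		/* Switching to a different mm: take a reference and adopt it. */
		atomic_inc(&mm->mm_count);
		tsk->active_mm = mm;
	}
	tsk->mm = mm;
	switch_mm(active_mm, mm, tsk);
	task_unlock(tsk);

	/* Fast path (active_mm == mm): no reference was taken, so none is dropped. */
	if (active_mm != mm)
		mmdrop(active_mm);
}

A kernel thread that temporarily needs a user address space calls use_mm(mm) and later unuse_mm(mm); with this change, the atomic increment and the matching mmdrop() are skipped whenever the thread's active_mm already is that mm, which is the common case the changelog describes.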
