locking/mutex: Restructure wait loop
Doesn't really matter yet, but pull the HANDOFF and trylock out from
under the wait_lock.

The intention is to add an optimistic spin loop here, which requires
we do not hold the wait_lock, so shuffle code around in preparation.

Also clarify the purpose of taking the wait_lock in the wait loop; it's
tempting to want to avoid it altogether, but the cancellation cases need
it to avoid losing wakeups.
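As a rough user-space analogue of that ordering argument, consider the
following minimal sketch. It assumes hypothetical names (struct waitq,
wait_or_cancel) and is not the kernel implementation; the point is only
that the cancellation check sits under the same lock the waker takes, so
a handoff cannot slip between the check and the sleep.

    #include <pthread.h>
    #include <stdbool.h>

    struct waitq {
            pthread_mutex_t lock;    /* plays the role of lock->wait_lock */
            pthread_cond_t  cond;
            bool            handoff; /* plays the role of MUTEX_FLAG_HANDOFF */
    };

    /* Returns true if we received the handoff, false if we cancelled. */
    static bool wait_or_cancel(struct waitq *q, bool (*cancelled)(void))
    {
            bool got = false;

            pthread_mutex_lock(&q->lock);
            for (;;) {
                    /* trylock-equivalent first, so a handoff is never skipped */
                    if (q->handoff) {
                            q->handoff = false;
                            got = true;
                            break;
                    }
                    /*
                     * Cancellation check still under the lock: a waker that
                     * sets ->handoff either sees us still waiting, or we see
                     * its handoff above. There is no window to lose a wakeup.
                     */
                    if (cancelled())
                            break;
                    pthread_cond_wait(&q->cond, &q->lock);
            }
            pthread_mutex_unlock(&q->lock);
            return got;
    }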

Suggested-by: Waiman Long <waiman.long@hpe.com>
Tested-by: Jason Low <jason.low2@hpe.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Peter Zijlstra authored and Ingo Molnar committed Oct 25, 2016
1 parent 9d659ae commit 5bbd7e6
 kernel/locking/mutex.c | 30 +++++++++++++++++++++++++-----
 1 file changed, 25 insertions(+), 5 deletions(-)

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -631,13 +631,21 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 
 	lock_contended(&lock->dep_map, ip);
 
+	set_task_state(task, state);
 	for (;;) {
+		/*
+		 * Once we hold wait_lock, we're serialized against
+		 * mutex_unlock() handing the lock off to us, do a trylock
+		 * before testing the error conditions to make sure we pick up
+		 * the handoff.
+		 */
 		if (__mutex_trylock(lock, first))
-			break;
+			goto acquired;
 
 		/*
-		 * got a signal? (This code gets eliminated in the
-		 * TASK_UNINTERRUPTIBLE case.)
+		 * Check for signals and wound conditions while holding
+		 * wait_lock. This ensures the lock cancellation is ordered
+		 * against mutex_unlock() and wake-ups do not go missing.
 		 */
 		if (unlikely(signal_pending_state(state, task))) {
 			ret = -EINTR;
@@ -650,16 +658,27 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 			goto err;
 		}
 
-		__set_task_state(task, state);
 		spin_unlock_mutex(&lock->wait_lock, flags);
 		schedule_preempt_disabled();
-		spin_lock_mutex(&lock->wait_lock, flags);
 
 		if (!first && __mutex_waiter_is_first(lock, &waiter)) {
 			first = true;
 			__mutex_set_flag(lock, MUTEX_FLAG_HANDOFF);
 		}
+
+		set_task_state(task, state);
+		/*
+		 * Here we order against unlock; we must either see it change
+		 * state back to RUNNING and fall through the next schedule(),
+		 * or we must see its unlock and acquire.
+		 */
+		if (__mutex_trylock(lock, first))
+			break;
+
+		spin_lock_mutex(&lock->wait_lock, flags);
 	}
+	spin_lock_mutex(&lock->wait_lock, flags);
+acquired:
 	__set_task_state(task, TASK_RUNNING);
 
 	mutex_remove_waiter(lock, &waiter, task);
@@ -682,6 +701,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 	return 0;
 
 err:
+	__set_task_state(task, TASK_RUNNING);
 	mutex_remove_waiter(lock, &waiter, task);
 	spin_unlock_mutex(&lock->wait_lock, flags);
 	debug_mutex_free_waiter(&waiter);
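Reassembled from the hunks above, the wait loop after this patch has
roughly the following shape (the signal and ww_mutex wound checks, which
are unchanged apart from the comment, are elided):

    	set_task_state(task, state);
    	for (;;) {
    		/* under wait_lock: trylock first, so a handoff is picked up */
    		if (__mutex_trylock(lock, first))
    			goto acquired;

    		/* signal and ww_mutex wound checks, still under wait_lock */
    		...

    		spin_unlock_mutex(&lock->wait_lock, flags);
    		schedule_preempt_disabled();

    		/* without wait_lock: claim HANDOFF once we are first waiter */
    		if (!first && __mutex_waiter_is_first(lock, &waiter)) {
    			first = true;
    			__mutex_set_flag(lock, MUTEX_FLAG_HANDOFF);
    		}

    		set_task_state(task, state);
    		if (__mutex_trylock(lock, first))
    			break;

    		spin_lock_mutex(&lock->wait_lock, flags);
    	}
    	spin_lock_mutex(&lock->wait_lock, flags);
    acquired:
    	__set_task_state(task, TASK_RUNNING);

Note how both trylock attempts and the HANDOFF flag update now run
outside the wait_lock, which is exactly the property the planned
optimistic spin loop needs.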
