Move each EntryState in the graph under its own Mutex #6074

Closed · stuhood opened this issue Jul 6, 2018 · 0 comments

stuhood commented Jul 6, 2018

In #6013, I said:

From profiling, I believe I have a handle on why we see a performance regression: because Shared has its own internal mutex (with the memory costs that entails), it could be operated on outside of the Graph's mutex. But as implemented, this patch removes all mutexes other than the Graph mutex, so we end up spending more time under that lock.

After more thought (and given the amount of complexity in the code after #6059), I think the clearest way to regain the concurrency we lost is to move each Entry's state under its own Arc<Mutex<...>>.


The first step of this change would be to replace:

pub struct Entry<N: Node> {
  node: EntryKey<N>,
  state: EntryState<N>,
}

... with:

pub struct InnerEntry<N: Node> {
  node: EntryKey<N>,
  state: EntryState<N>,
}

pub struct Entry<N: Node> {
  inner: Arc<Mutex<InnerEntry<N>>>,
}

...and fixing all compile errors. We'll likely also want to add #[derive(Clone)] on Entry for convenience.
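A minimal sketch of what that might look like (assuming the Arc is Entry's only field, so the derived Clone is a cheap refcount bump rather than a deep copy of the state):

// Sketch: cloning an Entry clones only the Arc, not the EntryState.
#[derive(Clone)]
pub struct Entry<N: Node> {
  inner: Arc<Mutex<InnerEntry<N>>>,
}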

After the first step, we will have more Mutexes involved, but we will still be acquiring the per-entry Mutex while holding the Graph mutex (see the sketch below). This first step should likely be committed as a separate PR.
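Concretely, a sketch of the nesting that remains after step one (reusing entry_for_id and the inner field from above; the body is illustrative, not the real code):

// After step one, the per-entry Mutex is still acquired while the
// Graph's Mutex is held, so the two locks are nested.
let mut inner = self.inner.lock().unwrap();        // Graph lock acquired
if let Some(entry) = inner.entry_for_id(dst_id) {
  // Per-entry lock acquired under the Graph lock.
  let state = entry.inner.lock().unwrap();
  // ... operate on the entry's state with both locks held ...
}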


The next step will be to take advantage of the fact that the Entry's Mutex is wrapped in an Arc, and to start cloning that Arc so that the Entry can be accessed outside of the Graph lock. This will affect a few codepaths, but the most important place to take advantage of it is in Graph::get, which should change from something like:

// Get or create the destination, and then insert the dep and return its state.
let mut inner = self.inner.lock().unwrap();
let dst_id = ...

// Declare the dep, and return the state of the destination.
inner.pg.add_edge(src_id, dst_id, ());
if let Some(entry) = inner.entry_for_id_mut(dst_id) {
  entry.get(context, dst_id).map(|(res, _)| res).to_boxed()
} else {
  future::err(N::Error::invalidated()).to_boxed()
}

...to:

let (entry, dst_id) = {
  // Get or create the destination, and then insert the dep.
  let mut inner = self.inner.lock().unwrap();
  let dst_id = ...

  // Declare the dep, and clone the Entry (ie, clone its inner Arc).
  inner.pg.add_edge(src_id, dst_id, ());
  match inner.entry_for_id(dst_id) {
    Some(entry) => (entry.clone(), dst_id),
    None => return future::err(N::Error::invalidated()).to_boxed(),
  }
};

// Call Entry::get outside the Graph lock.
entry.get(context, dst_id).map(|(res, _)| res).to_boxed()

That is, we should grab a reference to the Entry and clone it while holding the graph lock, then release the graph lock before calling get on the Entry.
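For reference, here is the same clone-then-release pattern in a self-contained form (Graph, Inner, and the String entries are illustrative stand-ins for the real types, not the actual API):

use std::sync::{Arc, Mutex};

// Illustrative stand-ins for the real Graph types.
struct Inner {
    entries: Vec<Arc<Mutex<String>>>,
}

struct Graph {
    inner: Mutex<Inner>,
}

impl Graph {
    fn get(&self, id: usize) -> Option<String> {
        let entry = {
            // Outer (Graph) lock: held only long enough to clone the Arc.
            let inner = self.inner.lock().unwrap();
            inner.entries.get(id)?.clone()
        }; // Graph lock released here.
        // The per-entry lock is acquired without holding the Graph lock.
        Some(entry.lock().unwrap().clone())
    }
}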
