Rewrite Starlark repository workers using an RPC paradigm... #22215
Conversation
I managed to reproduce some deadlocks during repo fetching with virtual worker threads. One notable trigger was some _other_ repo failing to fetch, which seems to cause Skyframe to try to interrupt other concurrent repo fetches. This _might_ be the cause for a deadlock where we submit a task to the worker executor service, but the task never starts running before it gets cancelled, which causes us to wait forever for a `DONE` signal that never comes. (The worker task puts a `DONE` signal in the queue in a `finally` block -- but we don't even enter the `try`.)

I then tried various things to fix this; this PR is an attempt that actually seemed to eliminate the deadlock. Instead of waiting for a `DONE` signal to make sure the worker thread has finished, we now hold on to the executor service, which offers a `close()` method that essentially uninterruptibly waits for any scheduled tasks to terminate, whether or not they have started running. (@justinhorvitz had suggested a similar idea before.) To make sure distinct repo fetches don't interfere with each other, we start a separate worker executor service for each repo fetch instead of making everyone share the same worker executor service. (This is recommended for virtual threads; see https://docs.oracle.com/en/java/javase/21/core/virtual-threads.html#GUID-C0FEE349-D998-4C9D-B032-E01D06BE55F2 for example.)

Related: #22003
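The key property described above -- that `ExecutorService.close()` waits for all submitted tasks, even ones cancelled before they started running -- can be sketched as follows. This is a minimal illustration, not Bazel's actual code: `fetchRepo` and its return value are hypothetical.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PerFetchExecutorDemo {
  // Hypothetical sketch: one virtual-thread executor per repo fetch,
  // closed (via try-with-resources) when the fetch ends.
  static String fetchRepo(String repoName) throws Exception {
    try (ExecutorService workerExecutor = Executors.newVirtualThreadPerTaskExecutor()) {
      Future<String> result = workerExecutor.submit(() -> "fetched " + repoName);
      return result.get();
    } // close() runs here: it waits uninterruptibly until every submitted
      // task has terminated, so no worker thread can outlive the fetch.
  }

  public static void main(String[] args) throws Exception {
    System.out.println(fetchRepo("my_repo"));
  }
}
```

Because each fetch owns its own executor, closing it cannot block on (or interrupt) tasks belonging to an unrelated repo fetch.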
This makes it possible to simplify the code in a number of ways:

* All work (including `setupRepoRoot()`) can now be done from the worker thread.
* The error bubbling hack can now be removed. I surmise that the deadlock occurred because the compute state is closed before error bubbling happens (which otherwise happens only very infrequently). Now that the deadlock is (hopefully) impossible, we don't need that anymore.
* The existence (or absence) of the worker thread is no longer used as state.
* `null` is no longer used as a signal; every distinct state is denoted by a distinct object.
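The last bullet -- distinct objects instead of `null` as signals -- can be sketched with a sealed hierarchy. The type and state names below are hypothetical illustrations, not the PR's actual classes:

```java
public class SignalDemo {
  // Hypothetical sketch: one explicit type per state, so no state is
  // ever represented by the absence of a value (null).
  sealed interface Signal permits Done, Restart, Failure {}
  record Done() implements Signal {}
  record Restart() implements Signal {}
  record Failure(Exception cause) implements Signal {}

  static String describe(Signal s) {
    // The switch is exhaustive over the sealed interface; a forgotten
    // state is a compile error rather than a silent null.
    return switch (s) {
      case Done d -> "worker finished";
      case Restart r -> "worker requested a Skyframe restart";
      case Failure f -> "worker failed: " + f.cause().getMessage();
    };
  }

  public static void main(String[] args) {
    System.out.println(describe(new Done()));
  }
}
```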
This time, using an RPC paradigm where the worker thread can request things from the host thread (but not the other way round).
PTAL! I'm becoming less and less sure that this is preferable to #22100, but at least it looks principled now (if complicated).
LGTM! It's up to you and Xudong to decide which solution you want to move forward with.
Packet<Request> packet = new Packet<>(seq, request);
logMaybe(String.format("LOG %s/worker: sending request %s", repositoryName, packet));
requestQueue.put(packet);
// This loop is necessary because the worker and host threads can get out of sync:
As discussed offline, I think that if we start a new worker thread in the case of RestartDecision(true), so that the worker thread can terminate after 3), this loop can be removed. Anyway, I think this is also good.
I think this would work, but @Wyverald decided to go with #22100 instead. I rate the two about equally in terms of complexity and probability that they fix the deadlock for good, so I think that's a reasonable decision. @Wyverald: if #22100 doesn't work, let's take this pull request back out of the closet. In the meantime, I'll close this since I hope it won't be necessary.
...where the worker thread can request things from the host thread, but not the other way round.
This makes it much easier to reason about what's happening between the threads while still allowing the two to share no state.
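The one-directional RPC pattern described in the commit message can be sketched with a request queue: the worker sends requests to the host and blocks for a reply, while the host only ever serves requests. This is a minimal illustration under assumed names (`Request`, `runHostAndWorker`); it is not the PR's actual implementation:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.SynchronousQueue;

public class WorkerRpcDemo {
  // Hypothetical request type: the payload plus a channel for the reply.
  record Request(String what, SynchronousQueue<String> replyTo) {}

  public static String runHostAndWorker() throws InterruptedException {
    BlockingQueue<Request> requestQueue = new ArrayBlockingQueue<>(1);

    // Worker thread: sends a request to the host, then blocks for the answer.
    // It never receives unsolicited messages from the host.
    Thread worker = Thread.ofVirtual().start(() -> {
      try {
        SynchronousQueue<String> replyTo = new SynchronousQueue<>();
        requestQueue.put(new Request("may I restart?", replyTo));
        String answer = replyTo.take(); // block until the host replies
        System.out.println("worker got: " + answer);
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    });

    // Host thread: serves exactly one request, then waits for the worker.
    Request req = requestQueue.take();
    req.replyTo().put("no");
    worker.join();
    return "done";
  }

  public static void main(String[] args) throws Exception {
    System.out.println(runHostAndWorker());
  }
}
```

Since every exchange is initiated by the worker, there is exactly one place where each thread can block, which is what makes the interaction easy to reason about.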