Use a Semaphore for signaling in repo fetching #22100
Conversation
Now that platform threads are not used anymore, why do we even need an executor service? Granted, that thread could still swallow interrupts, but at least there would be fewer moving parts.
That's a good point -- the executor service really has no value anymore, so I'll probably just switch to a plain Thread anyhow -- watch this space!
(force-pushed from a3f7e59 to 8957e00)
Okay, I gave up. (EDIT: to be clear, I gave up on switching to a plain Thread.) It seems to me that there's always some tension between the signal queue and the actual lifecycle of the worker thread. The host thread needs to listen on the signal queue while also watching out for the worker thread dying before emitting a signal. Even with the executor service approach, I'm not confident that we're free of deadlocks; for example, if the worker is interrupted due to memory pressure, the host will probably wait forever to take something from the signal queue until it gets interrupted by something. It almost feels like we need some sort of channel-select mechanism: upon any of 1) worker interrupted, 2) host interrupted, 3) worker emits signal, we need to take some action.
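For illustration, one way to approximate such a "select" with plain `java.util.concurrent` primitives is to funnel every worker-side event through a single semaphore and have the host inspect state after each wakeup. This is a hypothetical sketch (all names invented, and the host-interrupted case is not shown), not what the PR ended up doing verbatim:

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicReference;

public class SelectSketch {
    enum Event { WORKER_SIGNAL, WORKER_INTERRUPTED }

    public static void main(String[] args) throws Exception {
        Semaphore wakeup = new Semaphore(0);
        AtomicReference<Event> lastEvent = new AtomicReference<>();

        Thread worker = Thread.ofVirtual().start(() -> {
            try {
                // ... do some work, then record the event for the host:
                lastEvent.set(Event.WORKER_SIGNAL);
            } catch (Throwable t) {
                lastEvent.set(Event.WORKER_INTERRUPTED);
            } finally {
                // Semaphore.release() never blocks, so this works even if
                // the worker thread has been interrupted.
                wakeup.release();
            }
        });

        wakeup.acquire(); // host sleeps until *any* worker event fires
        System.out.println(lastEvent.get());
        worker.join();
    }
}
```

The key property is that every event path ends in a non-blocking `release()`, so the host's single `acquire()` doubles as the "select".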
I think I have an explanation for the deadlock you found on #22110: you interrupt the worker thread, and things go wrong from there. I can imagine two solutions to this:
I think I'd do (2), for the very practical reason that it avoids carefully orchestrating a shutdown by hand.
I take the "unbounded queue" idea back. The only case where a deadlock should happen is when one thread is waiting in a blocking queue operation.
There is no
This is not the case, unfortunately; Skyframe can close() the state object on high memory pressure, interrupting the worker thread.
Isn't that what we're doing already?
My apologies. I took your attempt and ran with it and created #22139. WDYT?
Since you have feedback from other reviewers, and I have no experience with virtual threads, I removed myself as a reviewer. Please let me know if you want me to take a look at anything in particular though.
I finally found a solution I'm happy with! Please see the updated PR description. This new approach using Semaphores is IMO pretty clean and easy to understand.
Out of all the solutions, I like this one the best. As appealing as my Thread-based solution is, it somehow always ends up being more complicated than necessary. This one arguably has a lot of moving parts, but at least not a lot of code, which is a win.
I managed to reproduce some deadlocks during repo fetching with virtual worker threads. One notable trigger was some _other_ repo failing to fetch, which seems to cause Skyframe to try to interrupt other concurrent repo fetches. This _might_ be the cause for a deadlock where we submit a task to the worker executor service, but the task never starts running before it gets cancelled, which causes us to wait forever for a `DONE` signal that never comes. (The worker task puts a `DONE` signal in the queue in a `finally` block -- but we don't even enter the `try`.)

This PR improves the situation in various ways:

1. Instead of using a `SynchronousQueue` for the signal queue, we now use a `Semaphore` for signaling. Semaphores have the crucial property that releasing a permit (i.e. incrementing the counter) does not block, and thus cannot be interrupted. This means that the worker thread can now reliably send signals to the host thread, even when it's interrupted.
2. Instead of using two signals for `DONE` and `RESTART`, we just use the one semaphore for both signals, and rely on `workerFuture.isDone()` to tell whether the worker has finished or is waiting for a fresh Environment.
3. The above requires another change: instead of signaling `DONE` in a `finally` block, we now use a `ListenableFuture` and release the semaphore in the worker future's listener. This makes sure that the signaling is performed _after_ the worker future's status changes. (Note that points 2 & 3 aren't the only way to handle this -- we could alternatively just use two semaphores.)
4. Instead of waiting for a `DONE` signal (or, in the new setup, the signal semaphore) to make sure the worker thread has finished, we now hold on to the executor service, which offers a `close()` method that essentially uninterruptibly waits for any scheduled tasks to terminate, whether or not they have started running. (@justinhorvitz had suggested a similar idea before.)
5. To make sure distinct repo fetches don't interfere with each other, we start a separate worker executor service for each repo fetch instead of making everyone share the same worker executor service. (This is recommended for virtual threads; see https://docs.oracle.com/en/java/javase/21/core/virtual-threads.html#GUID-C0FEE349-D998-4C9D-B032-E01D06BE55F2 for example.)

And because I now create a separate worker executor service for each repo fetch, it doesn't really make sense to use this for platform threads anymore. So setting `--experimental_worker_for_repo_fetching` to anything but `off` will cause us to use virtual threads.

Related: #22003
Fixes #21712.
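The handshake described in points 1-3 can be sketched with standard `java.util.concurrent` types, using `CompletableFuture.whenComplete` in place of Guava's `ListenableFuture` listener. The class and variable names here are invented for illustration; this is not the actual Bazel code:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;

public class SignalSketch {
    public static void main(String[] args) throws Exception {
        Semaphore signal = new Semaphore(0);
        // One fresh executor per fetch, as recommended for virtual threads.
        ExecutorService workerExecutor = Executors.newVirtualThreadPerTaskExecutor();

        // Worker task: does some work; the completion callback releases the
        // semaphore, so the signal fires whether the task finished normally,
        // threw, or was cancelled before it ever started running.
        CompletableFuture<String> workerFuture =
            CompletableFuture.supplyAsync(() -> "fetched", workerExecutor);
        workerFuture.whenComplete((result, err) -> signal.release());

        signal.acquire(); // host waits for any signal; release() never blocks
        if (workerFuture.isDone()) {
            // One semaphore for both signals: isDone() distinguishes
            // "worker finished" from "worker wants a fresh Environment".
            System.out.println(workerFuture.get());
        }
        workerExecutor.close(); // uninterruptibly awaits all scheduled tasks
    }
}
```

Because the callback runs only after the future's state transitions, the host can safely consult `workerFuture.isDone()` after every `acquire()`, mirroring point 3 above.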
…ressure while the host Skyframe thread is inactive
```java
@Override
public void close() {
  var myWorkerFuture = workerFuture;
  workerFuture = null;
  if (myWorkerFuture != null) {
    myWorkerFuture.cancel(true);
  }
  workerExecutorService.shutdownNow();
  // ...
}
```
nit: doesn't `shutdownNow()` cancel all pending work?
Yes, it does. If you're asking why I call both `shutdownNow()` and `future.cancel(true)`: per my reading of the code, `shutdownNow()` interrupts the worker thread, but does not technically cancel the future. So the future would end with a state of "finished exceptionally" with an `InterruptedException`. I think it's better to be consistent here and actually mark the future as cancelled.
TIL. Out of curiosity, did you experimentally verify this? It makes some weird sort of sense, but AFAIU each task an `ExecutorService` runs is wrapped in a future, so what would be the point of interrupting a task without cancelling the associated future?
I haven't experimentally verified this. But anyhow, the `ThreadPerTaskExecutor` does not hold on to the `Future`s it returns from its `submit()` method; it also has an `execute()` method that doesn't create a `Future` at all.
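A tiny standalone check of this interaction is easy to write. This is a sketch, not the Bazel code; since whether `shutdownNow()` alone marks the future cancelled may vary by executor, the sketch cancels explicitly first, as the comment thread above recommends:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ShutdownSketch {
    public static void main(String[] args) throws Exception {
        ExecutorService exec = Executors.newVirtualThreadPerTaskExecutor();
        Future<?> f = exec.submit(() -> {
            try {
                Thread.sleep(60_000); // simulate long-running fetch work
            } catch (InterruptedException e) {
                // interrupted by cancel(true) or shutdownNow()
            }
        });
        f.cancel(true);     // explicitly mark the future as cancelled...
        exec.shutdownNow(); // ...and interrupt any still-running tasks
        System.out.println(f.isCancelled()); // cancel() set the state
        exec.close();       // waits for the interrupted task to finish
    }
}
```

Calling `cancel(true)` before `shutdownNow()` guarantees the future reports `isCancelled() == true` regardless of whether the executor's shutdown path would have done so.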
```java
// Unless we know the worker is waiting on a fresh Environment, we should *always* shut down
// the worker executor and reset the state by the time we finish executing (successfully or
// otherwise). This ensures that 1) no background work happens without our knowledge, and
// 2) if the SkyFunction is re-entered for any reason (for example b/330892334 and
```
I assume that in this case, the return value of this SkyFunction will be null, but how? How does the worker thread guarantee that in that case, it will return null?
> I assume that in this case, the return value of this SkyFunction will be null, but how?

It's not null IIUC. For b/330892334 it'll presumably throw on the newly-discovered already-eval'd failing `env.getValue()`; for #21238 it's error-bubbling, so it should throw some previously thrown exception (or "bubble up", whatever that may mean).
I added some comments, but I think that they are just asking for reassurance, and the code works as it is today. I propose the following course of action:
Closes #22100. PiperOrigin-RevId: 630534733 Change-Id: If989bf9cae76abb1579a2b1de896df8e5a63b88d
Was this cherry-picked into 7.2.0rc2?

I thought it was, but you're right -- it seems to not have been: release-7.1.0...release-7.2.0rc2 Very good catch!

Oh good. I was just confirming since we just upgraded to 7.2.0rc2 and I wanted to make sure this bug fix landed. Thanks!
🥁 |