Hang in 'checking cached actions' #21712

Closed
peakschris opened this issue Mar 15, 2024 · 14 comments
Assignees
Labels
P1 I'll work on this now. (Assignee required) team-ExternalDeps External dependency handling, remote repositiories, WORKSPACE file. type: bug

Comments

@peakschris

Description of the bug:

I don't have a reliable way to reproduce this yet, but I've been seeing Bazel 7.1 hang during 'checking cached actions' on a 16-core Windows machine. This is quite new. The thing I've changed recently is a 'warm commit builder' that, in a loop, checks out the latest commit, builds it, then moves on to the next. Each new checkout probably changes only about 100 files in the source directory. The hang happens every 10 or so builds and might last 5-20 minutes before the build continues. I've reverted to Bazel 7.0.1 for now and will monitor.

Has anyone else seen this recently?

Which category does this issue belong to?

No response

What's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.

No response

Which operating system are you running Bazel on?

windows

What is the output of bazel info release?

release 7.1.0

If bazel info release returns development version or (@non-git), tell us how you built Bazel.

No response

What's the output of git remote get-url origin; git rev-parse HEAD ?

No response

Is this a regression? If yes, please try to identify the Bazel commit where the bug was introduced.

No response

Have you found anything relevant by searching the web?

No response

Any other information, logs, or outputs that you want to share?

No response

@peakschris peakschris changed the title Hang in 'checking cached outputs' Hang in 'checking cached actions' Mar 15, 2024
@iancha1992 iancha1992 added more data needed team-Remote-Exec Issues and PRs for the Execution (Remote) team labels Mar 15, 2024
@Wyverald
Member

Unfortunately, 'checking cached actions' is just the (poorly chosen) default message when Bazel doesn't really know what's going on.

Could you maybe capture a jstack while it's hanging?

@peakschris
Author

I'm also seeing this in 7.0.1.

Attached are jstack captures taken every 20 seconds during a hang. I started capturing around halfway through, and it had become unstuck by the last one or two captures. It seems to be related to file hashing?

Chris
hang1.zip

@coeuvre
Member

coeuvre commented Mar 19, 2024

According to the stack dump, it seems like Bazel was indeed busy checking the action cache. How large is your action cache, how many actions does your build have, and how large is your output tree?

The action cache is at $(bazel info output_base)/action_cache.

@iancha1992
Member

@bazel-io fork 7.1.2

@iancha1992
Member

@bazel-io fork 7.2.0

@keithl-stripe

keithl-stripe commented May 1, 2024

I caught a frozen build with Bazel 7.1.1. Its last output was an hour ago:

[72,497 / 72,620] 3460 / 3468 tests; [Prepa] Action datasets/XXX; 39s ... (106 actions, 0 running)
Analyzing: 211082 targets (96735 packages loaded, 794243 targets configured)
[72,546 / 72,620] 3465 / 3468 tests; [Prepa] Action datasets/XXX.pb; 39s ... (64 actions, 0 running)
Analyzing: 211082 targets (96735 packages loaded, 794243 targets configured)
[72,617 / 72,620] 3468 / 3468 tests; Action datasets/all_datasets.pb; 0s remote
Analyzing: 211082 targets (96735 packages loaded, 794243 targets configured)
[72,620 / 72,620] 3468 / 3468 tests; checking cached actions

Bazel is using 0-2% CPU - this seems like a deadlock.

I've attached the stack dump (all 33 MB of it 😄 )

Most skyframe-evaluator threads are stuck in repository fetching:


"skyframe-evaluator-17394" #44414 [26510] daemon prio=5 os_prio=0 cpu=3.13ms elapsed=4143.57s tid=0x00007f9c38045ae0 nid=26510 waiting on condition  [0x00007f86150eb000]
   java.lang.Thread.State: WAITING (parking)
	at jdk.internal.misc.Unsafe.park(java.base@21/Native Method)
	- parking to wait for  <0x00007f9def8f19f8> (a java.util.concurrent.SynchronousQueue$TransferStack)
	at java.util.concurrent.locks.LockSupport.park(java.base@21/Unknown Source)
	at java.util.concurrent.SynchronousQueue$TransferStack$SNode.block(java.base@21/Unknown Source)
	at java.util.concurrent.ForkJoinPool.compensatedBlock(java.base@21/Unknown Source)
	at java.util.concurrent.ForkJoinPool.managedBlock(java.base@21/Unknown Source)
	at java.util.concurrent.SynchronousQueue$TransferStack.transfer(java.base@21/Unknown Source)
	at java.util.concurrent.SynchronousQueue.take(java.base@21/Unknown Source)
	at com.google.devtools.build.lib.bazel.repository.starlark.StarlarkRepositoryFunction.fetch(StarlarkRepositoryFunction.java:170)
	at com.google.devtools.build.lib.bazel.repository.starlark.StarlarkRepositoryFunction.fetch(StarlarkRepositoryFunction.java:200)
	at com.google.devtools.build.lib.bazel.repository.starlark.StarlarkRepositoryFunction.fetch(StarlarkRepositoryFunction.java:200)
	at com.google.devtools.build.lib.bazel.repository.starlark.StarlarkRepositoryFunction.fetch(StarlarkRepositoryFunction.java:200)
	at com.google.devtools.build.lib.rules.repository.RepositoryDelegatorFunction.fetchRepository(RepositoryDelegatorFunction.java:418)
	at com.google.devtools.build.lib.rules.repository.RepositoryDelegatorFunction.compute(RepositoryDelegatorFunction.java:205)
	at com.google.devtools.build.skyframe.AbstractParallelEvaluator$Evaluate.run(AbstractParallelEvaluator.java:461)
	at com.google.devtools.build.lib.concurrent.AbstractQueueVisitor$WrappedRunnable.run(AbstractQueueVisitor.java:414)
	at java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(java.base@21/Unknown Source)
	at java.util.concurrent.ForkJoinTask.doExec(java.base@21/Unknown Source)
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(java.base@21/Unknown Source)
	at java.util.concurrent.ForkJoinPool.scan(java.base@21/Unknown Source)
	at java.util.concurrent.ForkJoinPool.runWorker(java.base@21/Unknown Source)
	at java.util.concurrent.ForkJoinWorkerThread.run(java.base@21/Unknown Source)

Oddly, each thread is waiting on a different lock.

% grep -o 'parking to wait .*TransferStack' ~/Downloads/bazel-cached-actions-hang-stack-dump.txt | wc -l
   11275
% grep -o 'parking to wait .*TransferStack' ~/Downloads/bazel-cached-actions-hang-stack-dump.txt | sort -u | wc -l
   11275
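
For what it's worth, this failure mode can be sketched in isolation. Below is a hypothetical, standalone Java example (invented names, not Bazel code) of how a host thread ends up parked forever in SynchronousQueue.take(), as in the stack traces above, when the task that was supposed to produce the signal gets cancelled before it ever starts, so its finally block never runs:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.SynchronousQueue;

// Standalone sketch with invented names; not Bazel code.
public class SynchronousQueueHang {
  public static void main(String[] args) throws InterruptedException {
    SynchronousQueue<String> signalQueue = new SynchronousQueue<>();
    ExecutorService workers = Executors.newSingleThreadExecutor();

    // Occupy the only worker thread so the next task stays queued.
    workers.submit(() -> sleep(60_000));

    Future<?> workerFuture = workers.submit(() -> {
      try {
        // ... real work would happen here ...
      } finally {
        put(signalQueue, "DONE"); // never reached if the task never starts
      }
    });

    // The task is cancelled before the executor ever ran it.
    workerFuture.cancel(/* mayInterruptIfRunning= */ true);

    // The host thread now waits for a signal that will never arrive.
    String signal = signalQueue.take(); // parks here forever
    System.out.println("got " + signal);
  }

  private static void sleep(long millis) {
    try { Thread.sleep(millis); } catch (InterruptedException e) { }
  }

  private static void put(SynchronousQueue<String> q, String s) {
    try { q.put(s); } catch (InterruptedException e) { }
  }
}
```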

@fmeum
Collaborator

fmeum commented May 1, 2024

Likely to be resolved by #22100

@Wyverald Wyverald added P1 I'll work on this now. (Assignee required) team-ExternalDeps External dependency handling, remote repositiories, WORKSPACE file. and removed untriaged team-Remote-Exec Issues and PRs for the Execution (Remote) team awaiting-user-response Awaiting a response from the author labels May 1, 2024
@Wyverald Wyverald self-assigned this May 1, 2024
@Wyverald
Member

Wyverald commented May 1, 2024

Thanks for the info, @keithl-stripe. This pretty much confirms this to be the repo worker thing. It's my top priority to fix at the moment for 7.2.0 (along with #22003, #22206, etc.).

Wyverald added a commit that referenced this issue May 1, 2024
Wyverald added a commit that referenced this issue May 2, 2024
Wyverald added a commit that referenced this issue May 6, 2024
I managed to reproduce some deadlocks during repo fetching with virtual worker threads. One notable trigger was some _other_ repo failing to fetch, which seems to cause Skyframe to try to interrupt other concurrent repo fetches. This _might_ be the cause for a deadlock where we submit a task to the worker executor service, but the task never starts running before it gets cancelled, which causes us to wait forever for a `DONE` signal that never comes. (The worker task puts a `DONE` signal in the queue in a `finally` block -- but we don't even enter the `try`.)

This PR improves the situation in various ways:

1. Instead of using a `SynchronousQueue` for the signal queue, we now use a Semaphore for signaling. Semaphores have the crucial property that releasing a permit (i.e. incrementing the counter) does not block, and thus cannot be interrupted. This means that the worker thread can now reliably send signals to the host thread, even when it's interrupted.

2. Instead of using two signals for `DONE` and `RESTART`, we just use the one semaphore for both signals, and rely on `workerFuture.isDone()` to tell whether the worker has finished or is waiting for a fresh Environment.

3. Instead of signaling `DONE` in a `finally` block, we now use a `ListenableFuture` and signal to the semaphore in the worker future's listener. This makes sure that the signaling is performed _after_ the worker future's status changes, and safeguards against the case where the submitted task never starts running before it gets cancelled.

4. Instead of waiting for a `DONE` signal (or, in the new setup, the signal semaphore) to make sure the worker thread has finished, we now hold on to the executor service, which offers a `close()` method that essentially uninterruptibly waits for any scheduled tasks to terminate, whether or not they have started running. (@justinhorvitz had suggested a similar idea before.) To make sure distinct repo fetches don't interfere with each other, we start a separate worker executor service for each repo fetch instead of making everyone share the same worker executor service. (This is recommended for virtual threads; see https://docs.oracle.com/en/java/javase/21/core/virtual-threads.html#GUID-C0FEE349-D998-4C9D-B032-E01D06BE55F2 for example.)

And because I now create a separate worker executor service for each repo fetch, it doesn't really make sense to use this for platform threads anymore. So setting `--experimental_worker_for_repo_fetching` to any but `off` will cause us to use virtual threads.

Related: #22003

Fixes #21712.

Closes #22100.

PiperOrigin-RevId: 630534733
Change-Id: If989bf9cae76abb1579a2b1de896df8e5a63b88d
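
As a rough illustration of points 1-4 above: the following is a minimal sketch of the described signaling scheme, assuming Guava's ListenableFuture and a Java 21 virtual-thread executor. All class and method names are invented for illustration; this is not the actual StarlarkRepositoryFunction implementation.

```java
import com.google.common.util.concurrent.ListenableFuture;
import com.google.common.util.concurrent.ListeningExecutorService;
import com.google.common.util.concurrent.MoreExecutors;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;

// Hypothetical sketch of the pattern described in the commit message above.
class RepoFetchHost {

  void fetchRepo() throws InterruptedException {
    Semaphore signal = new Semaphore(0);
    // A fresh executor per repo fetch, backed by virtual threads (Java 21),
    // so distinct fetches cannot interfere with each other.
    ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor();
    ListeningExecutorService workerExecutor = MoreExecutors.listeningDecorator(executor);
    try {
      ListenableFuture<?> workerFuture =
          workerExecutor.submit(() -> doFetchWork(signal));

      // Release a permit when the future completes for any reason: success,
      // failure, or cancellation before the task ever started running.
      // Unlike SynchronousQueue.put(), Semaphore.release() never blocks and
      // cannot be interrupted.
      workerFuture.addListener(signal::release, MoreExecutors.directExecutor());

      while (true) {
        signal.acquire(); // one semaphore for both DONE and RESTART
        if (workerFuture.isDone()) {
          return; // DONE (or failed / cancelled); result handling elided
        }
        // Not done yet: the worker asked for a restart with a fresh Environment.
        restartWithFreshEnvironment();
      }
    } finally {
      // close() waits uninterruptibly for scheduled tasks to terminate,
      // whether or not they ever started, so the host can no longer be left
      // hanging by a task that was cancelled before it ran.
      executor.close();
    }
  }

  // Worker side: releases the shared semaphore when it needs a restart; the
  // future's listener takes care of the DONE case.
  private void doFetchWork(Semaphore signal) {
    // ... run the repo rule; call signal.release() when a restart is needed ...
  }

  private void restartWithFreshEnvironment() {}
}
```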
Kila2 pushed a commit to Kila2/bazel that referenced this issue May 13, 2024
@iancha1992
Member

A fix for this issue has been included in Bazel 7.2.0 RC1. Please test out the release candidate and report any issues as soon as possible.
If you're using Bazelisk, you can point to the latest RC by setting USE_BAZEL_VERSION=7.2.0rc1. Thanks!

@oakad

oakad commented May 30, 2024

In my humble opinion, you should retract the 7.1 release. This bug is severe enough to void the basic "tool fit for purpose" assumption: a minor change to a dependency can cause the build job to never terminate. Moreover, it is a proverbial "rubber bomb that keeps bouncing", as different teams and systems usually have different version adoption timelines.

@peakschris
Author

@oakad I believe a workaround is to set `common --experimental_worker_for_repo_fetching=off` in .bazelrc. That might help you.

@peakschris
Author

@iancha1992 I am no longer seeing this issue in 7.2.0rc1. @Wyverald thank you for the fix!

@oakad

oakad commented May 30, 2024

@peakschris Of course there are workarounds, a safer one being to stick with 7.0. However, my point is that 7.1 is broken by default*. This is a particular problem for various CI/CD containers where the underlying image gets updated on some random schedule.

  * When installed without much thought and applied to existing workspaces.

@Wyverald
Member

I agree, this bug is quite bad indeed. Did you have an idea in mind for how we should 'retract' 7.1? Either way, when 7.2.0 is released (hopefully early next week), most automated update tools will point to that instead.
