
Cooperative Parallelism #10443

Open
kastiglione opened this issue Dec 18, 2019 · 10 comments
Labels
P2 We'll consider working on this in future. (Assignee optional) team-Local-Exec Issues and PRs for the Execution (Local) team type: feature request

Comments

kastiglione commented Dec 18, 2019

Description of the problem / feature request:

This is an umbrella issue of problems that arise from using build tools that have their own internal parallelism.

In this Google Groups thread, @jmmv asked for an issue to be filed about this:

https://groups.google.com/d/msg/bazel-discuss/_oHaU50P5Rg/imx5Y49MAwAJ

A little context: swiftc is the Swift compiler driver. It's a non-traditional compiler: it doesn't build one source file at a time, it builds one module of N source files at a time. swiftc spawns "swift frontend" invocations, and the number of spawned processes is very often greater than 1.

There are two related problems:

  1. Tools that perform parallel sub-actions cannot express this use of parallelism to Bazel.
  2. Actions have no API through which they can specify their maximum parallelism.

In the first case, it would be good if the action API could express to Bazel how much parallelism an action uses. This avoids the problem of N Bazel actions each running M sub-actions.

In the second case, it would be good if the action API could express the range of parallelism an action is capable of using. This would really help the performance of bottleneck actions on the critical path. For example, Bazel could see that it's not using its full jobs budget and donate the extra parallelism to the bottleneck action. We see this as particularly useful at the tail end of builds, where there are fewer targets left to build. The problem shows up even more in incremental builds, where the action graph is often much flatter, sometimes even linear.

As @allevato pointed out in the Google Groups thread, this would require some way for actions to pass arguments that are known not to affect the output, such as a -j<N> flag, without invalidating the action's cache key.
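To illustrate, here is one way such an API could surface in Starlark. This is entirely hypothetical: neither the `uncached_arguments` parameter nor the `{PARALLELISM}` placeholder exists in Bazel today; it is only a sketch of the shape being requested.

```starlark
def _swift_module_impl(ctx):
    args = ctx.actions.args()
    args.add_all(ctx.files.srcs)

    ctx.actions.run(
        executable = ctx.executable._swiftc,
        arguments = [args],
        # Hypothetical: arguments excluded from the action's cache key,
        # with the placeholder filled in at execution time using whatever
        # parallelism Bazel decides to grant this action.
        uncached_arguments = ["-j{PARALLELISM}"],
        inputs = ctx.files.srcs,
        outputs = [ctx.outputs.out],
    )
```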

Feature requests: what underlying problem are you trying to solve with this feature?

This feature allows us to avoid two current problems:

  1. oversubscribing the CPU during the build
  2. slower-than-necessary builds caused by wasted parallelism

The first issue can happen with any Swift module of more than 25 files. The default batching logic creates one swift frontend for each group of 25 files, so a Swift module with 100 files will spawn 4 sub-actions, unbeknownst to Bazel.
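To make the arithmetic concrete, here is a minimal sketch of that batching math, assuming the default batch size of 25 mentioned above:

```python
import math

SWIFT_BATCH_SIZE = 25  # default batch size described above

def frontend_count(num_files, batch_size=SWIFT_BATCH_SIZE):
    """Number of swift frontend sub-processes spawned for one module."""
    return math.ceil(num_files / batch_size)

# A 100-file module spawns 4 frontends that Bazel does not know about.
print(frontend_count(100))  # -> 4

# With --jobs=8, eight such actions could put 32 processes on the machine
# at once, even though Bazel believes it is running only 8.
print(8 * frontend_count(100))  # -> 32
```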

As mentioned, the second case is something that causes slowdowns for incremental development builds.

Bugs: what's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.

If needed, I can make a rules_swift project that demonstrates the issue.

We see the problem in our build by looking at --experimental_generate_json_trace_profile and by comparing to Xcode's builds, which can sometimes be faster, seemingly due to their hard-coded use of -j8.

What operating system are you running Bazel on?

macOS

What's the output of bazel info release?

release 1.2.0

Have you found anything relevant by searching the web?

As mentioned above, a small amount of discussion happened on Google Groups:

https://groups.google.com/d/msg/bazel-discuss/_oHaU50P5Rg/imx5Y49MAwAJ

I've also posted a general (non-bazel) question to the Swift Forums.

https://forums.swift.org/t/globally-optimized-build-parallelism/31802/2

@jin added the team-Local-Exec and untriaged labels on Dec 20, 2019
rupertks commented Dec 20, 2019

Thanks for creating this issue!

Since @jmmv specifically asked for it to be created, I am removing the untriaged label and giving it an initial P2 priority.

jmmv commented May 14, 2020

Just one more addition since I just closed #11275: if we do this and explicitly tell an action that it should use X threads, we also have to go the other way and ensure the action doesn't use more than X threads (1 in the general case!) when told so.

mzeren-vmw commented Oct 8, 2020

we also have to go the other way and ensuring the action doesn't use more than X threads (1 in the general case!) when told so.

I don't see this as a requirement, at least not until someone provides a use case. We have a local patch that lets us set concurrency, and we have cases where an action may peak briefly at 4 threads but empirically has a steady state of 2 threads, for example. A multi-threaded process does not usually consume cores in 100% increments.

motiejus commented Dec 28, 2020

Consider xz -T0, which uses all available cores. Assume:

```starlark
genrule(
    name = name + "-xz",
    srcs = [name],
    outs = [name + ".xz"],
    # -c writes the compressed stream to stdout so it can be redirected;
    # without it, xz would compress the input file in place.
    cmd = "xz -T0 -c $< > $@",
)
```

I would like to tell Bazel "this rule uses N-1 workers, where N is the number of available cores"; I would still like to leave 1-2 slots free for other, possibly IO-bound, actions.

@pauldraper
Note for comparison: GNU make supports this via its "jobserver": https://www.gnu.org/software/make/manual/html_node/Job-Slots.html
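For reference, the jobserver protocol mentioned above is simple enough to sketch: the top-level process preloads a pipe with N-1 tokens (the implicit Nth slot is the one the process itself holds), and each job acquires a token before running and returns it when done. A minimal Python sketch of that idea, not of make's actual implementation:

```python
import os

def make_jobserver(slots):
    """Create a make-style jobserver: a pipe preloaded with slots-1 tokens."""
    r, w = os.pipe()
    os.write(w, b"+" * (slots - 1))
    return r, w

def acquire(r):
    # Blocks until a token is available, i.e. until a job slot frees up.
    return os.read(r, 1)

def release(w, token):
    os.write(w, token)

r, w = make_jobserver(4)
tokens = [acquire(r) for _ in range(3)]  # three extra slots to hand out
print(len(tokens))  # -> 3
for t in tokens:
    release(w, t)
```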

@brentleyjones
xcodebuild is using the swift driver library to accomplish this: https://twitter.com/BenchR/status/1460699068846456832

matts1 commented Dec 6, 2022

GNU make recently added support for jobservers via named pipes (previously the jobserver had to be passed around via inherited file descriptors).

Could Bazel create named pipes for its own jobserver, and provide some mechanism (variable substitution?) for handing those pipes to actions? Someone made a workaround that runs a standalone jobserver service in the background for rules_foreign_cc's make, but it would be nice if this worked with fully self-contained builds.
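For context on the named-pipe handoff: GNU make 4.4 advertises the pipe to sub-makes through MAKEFLAGS as `--jobserver-auth=fifo:PATH` (older versions pass a pair of file descriptor numbers instead). A tool that wants to join the jobserver can parse that out, roughly:

```python
import re

def parse_jobserver_fifo(makeflags):
    """Extract the jobserver FIFO path from a MAKEFLAGS value, if present."""
    m = re.search(r"--jobserver-auth=fifo:(\S+)", makeflags)
    return m.group(1) if m else None

print(parse_jobserver_fifo("-j8 --jobserver-auth=fifo:/tmp/make_fifo"))
# -> /tmp/make_fifo
```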

@larsrc-google
One thing you can do now is provide better estimates of the #CPUs your jobs will use. @wilwell submitted d7f0724, which allows specifying the expected amount of CPU/RAM depending on the number of inputs. And I have work in progress to use cgroups for sandboxes, which would allow more flexible limits. Neither of those are as powerful as negotiating with the processes, but I want to see a strong need for that power before complicating matters even more.

@lukokr-aarch64
We have a very compute-heavy build action that would like to use as many cores as it can.

We have no option to break it down into smaller chunks.

Cooperative parallelism would go a long way for us: right now there is no obvious way, within a rule context, to obtain the number of jobs Bazel has available so that we can return it via resource_set.

We could work around this with a repository rule that calls out to nproc, but that would not account for Bazel's --jobs=N flag.
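For completeness, the nproc workaround could look roughly like the sketch below (the `host_cpus` rule name and generated file are made up for illustration). As noted, the value is captured at fetch time and knows nothing about --jobs=N:

```starlark
# Sketch of a repository rule that records the host core count.
def _host_cpus_impl(rctx):
    result = rctx.execute(["nproc"])
    cpus = result.stdout.strip() if result.return_code == 0 else "1"
    rctx.file("BUILD.bazel", 'exports_files(["cpus.bzl"])')
    rctx.file("cpus.bzl", "HOST_CPUS = %s\n" % cpus)

host_cpus = repository_rule(implementation = _host_cpus_impl)
```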

In the short term it would be useful if the resource_set callable received the upper limits on the resources. Something like:

```starlark
# This breaks the current API, but it demonstrates what we would like to do.
def _resources(default, limit, platform):
    default["cpu"] = _jobs(max = limit["cpu"], min = 1, diff = -2)
    return default
```

The idea for us is that on a machine with 56 cores we could reserve a few cores for other, smaller actions to trickle through.

Alternatively, a tag to mark all actions of a certain type as exclusive, similar to how tests can be marked with the exclusive tag.

fmeum commented Dec 12, 2023

With resource_set, you can approximate having an action run on min(N, $(nproc)) cores for a constant N, due to how scheduling works in Bazel: if a resource isn't in use (i.e., all remaining actions are waiting for the big one), an action is executed even if it requests resources in excess of the available total.

This doesn't allow any kind of $(nproc) - 2 logic though, which would be quite convenient. But I think that this would require special support in ctx.actions.args so that the action's command line can indicate the available level of parallelism. Just making the resource API more flexible wouldn't be sufficient.
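A sketch of that approximation, assuming the resource_set callback shape (OS name and input count in, dict of resource amounts out); the constant 16 and the memory figure are arbitrary illustrative values:

```starlark
# Request a large constant CPU count. Bazel will still run the action once
# it is the only one left, effectively granting it min(16, host_cpus) cores.
def _big_action_resources(os_name, inputs_size):
    return {"cpu": 16, "memory": 512}

# Passed as: ctx.actions.run(..., resource_set = _big_action_resources)
```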
