Avoid pinned memory for shuffle host buffers #1909

Merged
merged 1 commit into NVIDIA:branch-0.5 on Mar 11, 2021

Conversation

jlowe
Member

@jlowe jlowe commented Mar 10, 2021

When using the legacy Spark shuffle, shuffle data is first collected into host buffers that GpuShuffleCoalesceExec later concatenates into one monolithic host buffer, which is then sent to the GPU. The intermediate buffers do not need to be allocated from pinned memory because they are never copied directly to the device, so allocating them from pageable memory reduces the load on the pinned memory pool. Because the pinned memory pool allocator currently performs poorly with many allocations (see rapidsai/cudf#7553), this change yields a noticeable speedup on regular queries with the default 200 shuffle partitions.

Signed-off-by: Jason Lowe <jlowe@nvidia.com>
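The allocation policy described above can be sketched as follows. This is a hypothetical stand-alone illustration, not the spark-rapids or cudf API: `allocHost`, `goesToDevice`, and the byte-counter "pinned pool" are invented for the sketch, and plain `ByteBuffer`s stand in for real host memory buffers. It shows the key decision: only the final coalesced buffer that will be copied to the GPU draws from the pinned pool, while the many per-partition intermediates use ordinary pageable memory.

```java
import java.nio.ByteBuffer;

// Hypothetical sketch of the allocation policy in this PR: intermediate
// shuffle buffers get pageable host memory; only buffers headed directly
// to the device are taken from the (simulated) pinned pool.
public class ShuffleHostAlloc {
    // Stand-in for a pinned memory pool: just a byte counter here.
    static long pinnedInUse = 0;
    static final long PINNED_POOL_BYTES = 1 << 20; // assumed 1 MiB pool

    /** Allocate a host buffer; use pinned memory only when the buffer
     *  will be copied directly to the GPU. */
    static ByteBuffer allocHost(int bytes, boolean goesToDevice) {
        if (goesToDevice && pinnedInUse + bytes <= PINNED_POOL_BYTES) {
            pinnedInUse += bytes;               // pretend-pinned allocation
            return ByteBuffer.allocateDirect(bytes);
        }
        // Intermediate shuffle buffers land here: pageable memory, which
        // leaves the pinned pool free and sidesteps its allocator overhead.
        return ByteBuffer.allocate(bytes);
    }

    public static void main(String[] args) {
        // 200 partitions of intermediate shuffle data: none touch the pool.
        for (int p = 0; p < 200; p++) {
            allocHost(64 * 1024, /*goesToDevice=*/false);
        }
        System.out.println("pinned bytes used by intermediates: " + pinnedInUse);

        // The single concatenated buffer sent to the GPU may be pinned.
        ByteBuffer toDevice = allocHost(512 * 1024, /*goesToDevice=*/true);
        System.out.println("pinned bytes after coalesced buffer: " + pinnedInUse);
    }
}
```

With 200 intermediate buffers the simulated pinned pool sees zero allocations; only the one coalesced buffer is pinned, which mirrors why this change relieves pressure on the real pinned pool allocator.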
@jlowe jlowe added the performance A performance related task/issue label Mar 10, 2021
@jlowe jlowe added this to the Mar 1 - Mar 12 milestone Mar 10, 2021
@jlowe jlowe self-assigned this Mar 10, 2021
@jlowe
Member Author

jlowe commented Mar 10, 2021

build

@jlowe jlowe merged commit 7c2f80f into NVIDIA:branch-0.5 Mar 11, 2021
nartal1 pushed a commit to nartal1/spark-rapids that referenced this pull request Jun 9, 2021
Signed-off-by: Jason Lowe <jlowe@nvidia.com>
nartal1 pushed a commit to nartal1/spark-rapids that referenced this pull request Jun 9, 2021
Signed-off-by: Jason Lowe <jlowe@nvidia.com>
@jlowe jlowe deleted the shuffle-no-pinned branch September 10, 2021 15:32
Labels
performance A performance related task/issue
3 participants