
Test updates of CCCL (thrust, cub, libcudacxx) to 2.1.0. #3516

Closed
wants to merge 7 commits

Conversation

@bdice bdice commented Apr 26, 2023

This PR tests a rapids-cmake branch with CCCL (thrust, cub, libcudacxx) updated to 2.1.0. Do not merge this PR. The changes will be merged upstream in rapids-cmake after all libraries pass CI.

@BradReesWork BradReesWork added this to the 23.08 milestone May 24, 2023
@BradReesWork BradReesWork changed the base branch from branch-23.06 to branch-23.08 May 30, 2023 12:53
@bdice bdice changed the base branch from branch-23.08 to branch-23.10 July 27, 2023 15:42
@BradReesWork BradReesWork modified the milestones: 23.08, 23.10 Aug 1, 2023
bdice commented Sep 14, 2023

This PR is ready to pass off to a cuGraph C++ dev for completion. We are planning to ship CCCL 2.1.0 support (possibly CCCL 2.2.0) in RAPIDS 23.12. The corresponding rapids-cmake changes will be merged early in the 23.12 development cycle. We are aiming for the PRs with the changes needed for CCCL 2 support to be ready to merge into every RAPIDS library when 23.10 burndown begins and the 23.12 branch is created. Overall readiness of RAPIDS libraries for this CCCL major version bump is being tracked in rapidsai/rapids-cmake#399 (comment)

The most common change needed is to wrap device lambdas whose return type must be known on the host in cuda::proclaim_return_type<ReturnType>([...] __device__ (...){ ... });. This is most often required for thrust::transform calls and for constructing transform iterators, though other Thrust/CUB algorithms have the same requirement. Algorithms like thrust::for_each, however, do not require the host to know the return type, so don't assume that every __device__ lambda needs to proclaim one. (Most of the time, I believe the difference comes down to whether the algorithm's working memory or shared memory sizes depend on the return type.) See the sketch below, and refer to rapidsai/cudf#13222 for more examples.
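A minimal sketch of the pattern, assuming CCCL 2.x is on the include path (the vectors and lambda here are illustrative, not taken from cuGraph):

```cpp
#include <cuda/functional>         // cuda::proclaim_return_type (CCCL 2.x)
#include <thrust/device_vector.h>
#include <thrust/transform.h>

int main()
{
  thrust::device_vector<int> in(10, 2);
  thrust::device_vector<float> out(10);

  // thrust::transform needs the functor's return type on the host, so the
  // extended device lambda is wrapped in cuda::proclaim_return_type. With
  // CCCL 1.x a bare `[] __device__ (int x) { ... }` lambda was accepted here.
  thrust::transform(in.begin(), in.end(), out.begin(),
                    cuda::proclaim_return_type<float>(
                      [] __device__(int x) { return x * 0.5f; }));
  return 0;
}
```

Note that extended device lambdas require compiling with nvcc's --extended-lambda (formerly --expt-extended-lambda) flag.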

Be aware that some build errors might stem from other API changes in CCCL 2.1.0, so additional changes beyond device lambda return types may be necessary. Consult the Thrust, CUB, and libcu++ changelogs for details.

@seunghwak seunghwak mentioned this pull request Sep 19, 2023
@BradReesWork BradReesWork changed the base branch from branch-23.10 to branch-23.12 September 26, 2023 13:52
@BradReesWork BradReesWork modified the milestones: 23.10, 23.12 Sep 26, 2023
@ChuckHastings ChuckHastings modified the milestones: 23.12, 24.02 Nov 7, 2023
@bdice bdice mentioned this pull request Dec 7, 2023
bdice commented Dec 7, 2023

Closing in favor of #4052.

@bdice bdice closed this Dec 7, 2023
rapids-bot bot pushed a commit that referenced this pull request Dec 20, 2023
This PR updates cuGraph to CCCL 2.2.0. Do not merge until all of RAPIDS is ready to update.

Depends on #3862.

Replaces #3516.

Authors:
  - Bradley Dice (https://github.com/bdice)

Approvers:
  - Vyas Ramasubramani (https://github.com/vyasr)
  - Ray Douglass (https://github.com/raydouglass)

URL: #4052