
use dask-cudf to utilize multiple GPUs #116

Closed
jangorecki opened this issue Nov 14, 2019 · 8 comments · Fixed by #219

Comments

@jangorecki
Contributor

cudf uses only a single GPU, so it would be useful to employ dask-cudf rather than plain cudf.
https://blog.dask.org/2019/01/29/cudf-joins
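A minimal sketch of what the switch could look like, following the approach in the linked blog post. This assumes a machine with two GPUs; the file path and column names are made up for illustration:

```python
# Sketch: run one Dask worker per GPU, then operate on a dask_cudf DataFrame
# instead of a plain cudf one. Assumes 2 GPUs; "data.csv", "id1" and "v1"
# are hypothetical.
from dask.distributed import Client
from dask_cuda import LocalCUDACluster
import dask_cudf

cluster = LocalCUDACluster(n_workers=2)  # one worker per visible GPU
client = Client(cluster)

# dask_cudf partitions the data across the GPU workers
df = dask_cudf.read_csv("data.csv")
result = df.groupby("id1").v1.sum().compute()  # gathers to a cudf object
```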

@jangorecki
Contributor Author

This can unfortunately cause other issues; see for example rapidsai/cudf#3363 (comment).

@jangorecki
Contributor Author

As pointed out by @datametrician, we should also be able to use out-of-memory data storage with dask-cudf; then it even makes sense to use dask-cudf for a single GPU.
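A sketch of how that single-GPU setup could look with dask-cuda's host-memory spilling (the limit value below is an assumption, not something from this thread):

```python
# Sketch: even with a single GPU, dask-cuda can spill device memory to host
# memory once usage exceeds device_memory_limit (the "10GB" value is made up).
from dask.distributed import Client
from dask_cuda import LocalCUDACluster

cluster = LocalCUDACluster(
    n_workers=1,
    device_memory_limit="10GB",  # spill to main memory above this threshold
)
client = Client(cluster)
```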

@jangorecki
Contributor Author

waiting for rapidsai/cudf#2288

@jangorecki
Contributor Author

Dask does not seem to be mandatory for spilling to main memory. Due to the poorly documented setup of dask-cudf, that part will be solved separately: #129
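One Dask-free option (my assumption about what is meant here, not confirmed in this thread) is RMM's managed (unified) memory, which lets cudf allocations oversubscribe the GPU and page to host memory:

```python
# Sketch: CUDA managed memory via RMM allows cudf to oversubscribe device
# memory without Dask; allocations can page between GPU and host.
import rmm

rmm.reinitialize(managed_memory=True)

import cudf  # cudf allocations now go through the managed-memory allocator
```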

@datametrician

Without Dask, you still only use 1 GPU instead of both of them.

@jangorecki
Contributor Author

@datametrician yes, I am aware of that; the plan is to move to dask-cudf, so this issue stays open.

@jangorecki
Contributor Author

Using dask-cudf will additionally allow us to attempt the 1e9 data size by using the spill-to-disk feature, as explained in rapidsai/cudf#3740 (comment).
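A sketch of chaining the spilling levels so the data can exceed both GPU and host memory (the limits and the spill directory below are assumptions for illustration):

```python
# Sketch: chain device -> host -> disk spilling so the 1e9-row data size can
# exceed both GPU and host memory. All limits and the path are hypothetical.
from dask.distributed import Client
from dask_cuda import LocalCUDACluster

cluster = LocalCUDACluster(
    n_workers=2,
    device_memory_limit="10GB",   # spill GPU memory to host above this
    memory_limit="50GB",          # spill host memory to disk above this
    local_directory="/tmp/dask-spill",
)
client = Client(cluster)
```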

@jangorecki
Contributor Author

rapidsai/cudf#2288 has finally been resolved, and it looks like we can proceed to using dask_cudf to utilize both GPUs.
