GPU-based reduce expressions sometimes seemingly hang on desktop GPUs (but not really) #25979

Open

e-kayrakli (Contributor) opened this issue on Sep 23, 2024 · 0 comments
I've stumbled into this several times in the past. It can be really frustrating and confusing, but it may also be specific to my setup, as I haven't been able to reproduce it anywhere but my own workstation. I still wanted to record it in case it is an issue for someone else.

Consider this simple snippet:

const target = if here.gpus.size > 0 then here.gpus[0] else here; // prefer a GPU sublocale if present

on target {
  var Arr: [1..1_000_000] int;

  // increment every element; the attribute asserts the loop is GPU-eligible
  @gpu.assertEligible
  foreach elem in Arr do
    elem += 1;

  // the reduction below is where the apparent hang occurs
  writeln(+ reduce Arr);
}

When I compile and run this on an RTX A2000 with CUDA 12.4, the first run takes about a minute to finish. If I run it again soon thereafter, it is almost instantaneous. Whenever I run into this, I assume there is a bug and start debugging after Ctrl+C'ing the execution. FWIW, --debugGpu shows that we call into CUB helpers from the host for the final round of the reduction, and that those CUB calls are what take so long to return. This could be related to the NVIDIA Persistence Daemon issues we have seen in the past, which are documented in the GPU technote.
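In case it helps with triage, here is a minimal sketch that times the reduction separately from the kernel (assuming a recent Chapel release where the Time module provides stopwatch; the variable names are mine):

use Time;

const target = if here.gpus.size > 0 then here.gpus[0] else here;

on target {
  var Arr: [1..1_000_000] int;

  @gpu.assertEligible
  foreach elem in Arr do
    elem += 1;

  var sw: stopwatch;
  sw.start();
  const total = + reduce Arr; // the step that stalls on a cold run
  sw.stop();
  writeln("sum = ", total, " (reduce took ", sw.elapsed(), " s)");
}

A cold run should attribute nearly all of the time to the reduce, matching what --debugGpu reports. If the persistence daemon is indeed the culprit, enabling persistence mode (e.g., sudo nvidia-smi -pm 1) might avoid the cold-start cost, though I haven't verified that on this machine.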
