Specialized CUDA kernels lose a reference to the cache #9604

Open
gmarkall opened this issue Jun 4, 2024 · 0 comments
Labels
bug - incorrect behavior (Bugs: incorrect behavior), caching (Issue involving caching), CUDA (CUDA related issue/PR)
Milestone
0.61.0-rc1
Comments

gmarkall (Member) commented Jun 4, 2024

Quick notes based on a verbal description from @guilhermeleobas: specialize() does not pass any caching configuration to the specialized dispatcher it creates, so caching is broken for specialized dispatchers. (See line 702 of numba/cuda/dispatcher.py.)
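A minimal sketch of the situation being reported, assuming the public numba.cuda API (the kernel name `add_one` and the array are illustrative only): a kernel requests `cache=True`, but the dispatcher returned by `specialize()` is created without that setting, so its compilation results would not reach the on-disk cache.

```python
# Sketch only -- names are illustrative, behaviour is as described in this report.
import numpy as np
from numba import cuda


@cuda.jit(cache=True)  # caching requested on the original dispatcher
def add_one(x):
    i = cuda.grid(1)
    if i < x.size:
        x[i] += 1


arr = cuda.to_device(np.zeros(16, dtype=np.float32))

# specialize() returns a new dispatcher fixed to these argument types, but
# (per this report) it is constructed without the caching configuration, so
# compilations of the specialized kernel are not written to the cache.
specialized = add_one.specialize(arr)
specialized[1, 16](arr)
```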

@gmarkall gmarkall added CUDA CUDA related issue/PR caching Issue involving caching bug - incorrect behavior Bugs: incorrect behavior labels Jun 4, 2024
@gmarkall gmarkall added this to the 0.61.0-rc1 milestone Jun 4, 2024