
Remove ban on using string_cast with CUDA host code #1041

Merged Nov 21, 2020

Conversation


@lpisha lpisha commented Nov 18, 2020

string_cast.hpp merely detects whether the current compiler is NVCC (originally based on `#if defined(__CUDACC__)` in glm/simd/platform.h) and raises a compile-time error if it is. This means string_cast.hpp cannot be included in any header which might ever be used in a CUDA project.

Of course, glm::to_string can't be used in device (GPU) code. However, the current approach to preventing this is both incorrect and unnecessary. `__CUDACC__` is defined during both the host and the device compilation passes, and glm::to_string can obviously be used in host code. The correct macro is `__CUDA_ARCH__`, which is defined only while compiling device code. Even so, there's no problem with glm::to_string being *defined* (the header being included) while compiling device code, as long as it's not actually *used* in the device code. So raising an error whenever `__CUDA_ARCH__` is defined would still prevent string_cast.hpp from being included in CUDA projects.

There's actually no need for any manual check of whether glm::to_string is being used in device code, because the compiler already checks for that. It returns a std::string, which itself can't be used in device code, so it's unlikely a developer would try. And if they did, there would be errors stating that glm::to_string, along with all the needed std::string constructors, stream operators, etc., are host-only functions.

@christophe-lunarg commented:

Actually, developers have tried... Why not use `__CUDA_ARCH__` instead to warn developers if they try?

@lpisha (Author) commented Nov 20, 2020

> Actually, developers have tried... Why not use `__CUDA_ARCH__` instead to warn developers if they try?

Perhaps it wasn't clear from above. It's perfectly legal to `#include <glm/gtx/string_cast.hpp>` or `#include <iostream>` or any other non-GPU functionality in a CUDA program, both while compiling for CPU and while compiling for GPU. What's not okay is to use the standard library classes, or things dependent on them, in the device functions / kernels.
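To make that concrete, here is a minimal sketch of a legal CUDA translation unit (the file name and kernel are hypothetical, not from this PR): the host-only header is included unconditionally, the kernel never touches it, and only host code calls glm::to_string.

```cuda
// example.cu (hypothetical) -- compiles cleanly with this PR applied.
// Including the host-only header is fine for both compilation passes.
#include <cstdio>
#include <glm/glm.hpp>
#include <glm/gtx/string_cast.hpp>  // previously a hard #error under NVCC

__global__ void fill(float* out) {
    out[threadIdx.x] = float(threadIdx.x);  // device code: no glm::to_string here
}

int main() {
    float host[3] = {};
    float* dev = nullptr;
    cudaMalloc(&dev, sizeof(host));
    fill<<<1, 3>>>(dev);
    cudaMemcpy(host, dev, sizeof(host), cudaMemcpyDeviceToHost);
    cudaFree(dev);

    // Host code: using the host-only function here is completely legal.
    glm::vec3 v(host[0], host[1], host[2]);
    std::printf("%s\n", glm::to_string(v).c_str());
    return 0;
}
```

During the device pass, NVCC still sees the string_cast.hpp declarations; that is harmless because nothing in the kernel references them.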

The current state throws an error if string_cast.hpp is included during either CPU or GPU compilation, regardless of whether glm::to_string is used anywhere.

Your proposal would throw a warning if string_cast.hpp is included during GPU compilation only, regardless of whether glm::to_string is used anywhere (or specifically in device functions, where it's not allowed to be). This is obviously much better, but is still unnecessary because it would throw a warning for code which is completely correct and legal.

If the question is, "why would you ever include a header you can't use?", the answer is: because you're including the header to be used in the CPU code within the same file or project. The header is still included while the GPU code is being compiled. The user could put `#ifdef`s around the include based on `__CUDA_ARCH__` to silence the warning, but this is silly.

A CUDA program using headers which define CPU-only objects is completely normal; developers having to not use those objects in GPU code is completely normal. There's no need for a special warning or any kind of special treatment of this situation in GLM at all.

@christophe-lunarg commented:

But using `__CUDA_ARCH__`, wouldn't it be possible to include the files anywhere, with no warning, but with the code simply not present when compiling GPU code?

Fundamentally, I would use `__CUDA_ARCH__` the way ARM or x86 intrinsics are handled, but here with no implementation at all.

@lpisha (Author) commented Nov 20, 2020

Yes, that's a possibility. However, the result would be that, if a developer tried to use glm::to_string in device code, they would get an error like `the symbol to_string is not defined in namespace glm`, rather than the more helpful `cannot call a __host__ function glm::to_string from a __device__ function your_code`.
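A sketch of the misuse in question (the kernel name is hypothetical). With the header merely included, as this PR does, NVCC rejects the call itself at the call site, with a diagnostic along the lines shown in the comment; the exact wording varies by NVCC version:

```cuda
#include <glm/glm.hpp>
#include <glm/gtx/string_cast.hpp>

__global__ void bad_kernel(glm::vec3 v) {
    // Illegal: glm::to_string is host-only (it returns a std::string).
    // NVCC reports something like:
    //   error: calling a __host__ function("glm::to_string")
    //   from a __global__ function("bad_kernel") is not allowed
    glm::to_string(v);
}
```

With the `#ifdef __CUDA_ARCH__` approach, the declaration would simply vanish during the device pass, and the same mistake would instead produce the less informative "symbol not defined" error.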

Again, there is no need for any kind of special treatment of a CPU-only function that doesn't work in GPU code in CUDA. This is completely normal and every CUDA developer is used to this.

@christophe-lunarg christophe-lunarg merged commit 4ae59db into g-truc:master Nov 21, 2020
@christophe-lunarg

Ok, let's try then!
