Vulkan Not Present or Passed Through in RetroArch Container #58
Hi, here is another example that may be helpful: https://www.reddit.com/r/kasmweb/comments/zvee3q/nvidia_gpu_with_steam_workspace/ It was a test to get the beta version of Godot running, which required Vulkan. In that case the Vulkan SDK was needed, so we built a custom image based on the official Vulkan images. Consider linking this issue on Reddit to see if others have more to contribute to the conversation.
I can post to Reddit, but this issue is easy to reproduce with your own RetroArch image if a compatible card is available. If you change the video driver to Vulkan in the RetroArch configuration, RetroArch enters an infinite loading loop and never actually loads until you change the configuration back to "gl". As confirmed in my example above, the Nvidia driver is already presented to the container, the required Vulkan dependencies are installed on the host, and the GT 710 is Vulkan compatible. I did find an issue with Vulkan passthrough to my containers that is now resolved, but the issue with the RetroArch container remains. Vulkan error:
The fix was found here: I had to remove a directory mistakenly created by the driver install and replace it with a regular file with the following contents, then set it executable (+x):
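The exact file contents weren't captured above. For reference, a typical Nvidia Vulkan ICD manifest (`nvidia_icd.json`) looks like the sketch below; the library name, `api_version`, and install path are assumptions and vary by driver release:

```shell
# Sketch only: recreate the ICD manifest as a regular file.
# (The driver install had left a directory at this path instead of a file.)
# Library name and api_version are assumptions -- match your driver release.
cat > nvidia_icd.json <<'EOF'
{
    "file_format_version": "1.0.0",
    "ICD": {
        "library_path": "libGLX_nvidia.so.0",
        "api_version": "1.3.194"
    }
}
EOF
chmod +x nvidia_icd.json

# Then move it into place (the path may differ on your distro), e.g.:
# sudo mv nvidia_icd.json /usr/share/vulkan/icd.d/nvidia_icd.json
```

The Vulkan loader reads every manifest in its `icd.d` search paths, so a stray directory (or malformed JSON) at that location can break device enumeration entirely.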
After fixing this, I can run the Nvidia Vulkan container and get the correct vulkaninfo output:
If I install vulkan-tools and run vulkaninfo inside the RetroArch container, vulkaninfo gives the following error, and I'm not sure why, as my Google-fu has reached its limits this morning. I suspect a driver issue, since the container is using a Mesa driver for video rather than Nvidia, yet nvidia-smi still works in the container:
I initially thought a display needed to be passed through to the container, but vulkaninfo in the official Vulkan container from Nvidia shows the same XDG_RUNTIME_DIR error at the top of its output. Perhaps another variable needs to be passed to the containers on creation, but I'm not sure what else to try here. Here is another post I found with a similar issue for Vulkan, claiming a display is not present for the container, although again I don't think that is the problem:
I am also tracking this issue on Reddit here: https://www.reddit.com/r/kasmweb/comments/12cifwe/comment/jf28x34/?context=3 I was able to get my container to recognize my video card and render with it properly by re-deploying a desktop distribution of Ubuntu 22, but selecting the Vulkan driver in RetroArch still leads to an endless boot loop with the same errors I've posted above from within the container.
Thanks to Justin over on the subreddit, this is resolved. Adding these items to the Workspace got the Vulkan driver working. Volume Mapping:
Docker Run Config Override
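The exact values were elided here. Purely as a sketch based on similar Kasm GPU workspace write-ups (every path and key below is an assumption, not the confirmed fix), the two fields might look something like the following.

Volume Mapping (bind the host's ICD manifests into the container read-only):

```json
{
  "/usr/share/vulkan/icd.d": {
    "bind": "/usr/share/vulkan/icd.d",
    "mode": "ro"
  }
}
```

Docker Run Config Override (select the Nvidia runtime and expose all driver capabilities):

```json
{
  "runtime": "nvidia",
  "environment": {
    "NVIDIA_DRIVER_CAPABILITIES": "all"
  }
}
```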
I'm setting up a RetroArch workspace, and Vulkan support is not being passed through to my workspace container. If I select Vulkan as the driver in RetroArch and restart it from the menu, it enters an infinite loop until I change the driver back to "gl" in the local config file.
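For reference, the toggle in question is the standard `video_driver` option in `retroarch.cfg` (the file's location depends on your setup):

```
# retroarch.cfg -- the video driver setting described above
video_driver = "vulkan"

# reverting restores a working state:
# video_driver = "gl"
```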
I do have the Nvidia drivers, the Nvidia container runtime, and the Vulkan libraries on my Ubuntu 22 host and passed through to the container, and I know games are utilizing them. I can run nvidia-smi from within a RetroArch container once it is spun up as a workspace, but the vulkaninfo command is not available inside the container:
The Vulkan library is passed through to the container properly from the host:
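The original command output was not captured here; a generic way to confirm this from inside the container is to query the dynamic linker cache and the loader's ICD search paths:

```shell
# Generic checks, run inside the container.
# List Vulkan-related libraries known to the dynamic linker:
ldconfig -p | grep -i vulkan || echo "no Vulkan libraries found"

# List the ICD manifests the Vulkan loader can discover:
ls /usr/share/vulkan/icd.d/ /etc/vulkan/icd.d/ 2>/dev/null || true
```

If `libvulkan.so.1` shows up but no Nvidia ICD manifest does, the loader will fall back to whatever Mesa driver is present, which matches the symptom described below.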
Installing vulkan-tools is not enough to get Vulkan working:
From what I've read, the Nvidia drivers also need to be installed inside the container, but installation fails because the Nvidia driver is in use via the host passthrough:
nvidia-smi command working:
I will try to build my own RetroArch image off your core image, but that would be a new venture for me and I will likely fumble around with it for a while. If someone can create a new dev image for me to test, I'd appreciate it!
If I do get an image built and tested I will post results.