Stuck on deployment #61
I wonder if it has anything to do with this: https://github.com/NVIDIA/nvidia-docker?tab=readme-ov-file ("DEPRECATION NOTICE: This project has been superseded by the NVIDIA Container Toolkit.")

Update (Feb 5, 2024): I tried installing the NVIDIA Container Toolkit, but it did not fix my specific problem. I'm still stuck at the nvidia-caps devices. What is that device anyway? Is it for capture?
Since you now have the NVIDIA driver toolkit installed, could you try running
and see if the devices "magically appear"? You can also try running Wolf without those extra devices, but I fear it will probably fail. By the way, which distro are you using? Also, beware of #60 with that driver version. Sorry for all the trouble, but Nvidia is just plain hostile on Linux.
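In the meantime, a quick sanity check is to list the device nodes that Wolf's stock docker command expects (the paths below are taken from the run command later in this thread; adjust `/dev/nvidia0` if your GPU index differs):

```shell
# Check which of the NVIDIA device nodes from Wolf's docker command
# actually exist on the host, and report any that are missing.
check_nvidia_devs() {
  for dev in /dev/nvidiactl /dev/nvidia0 /dev/nvidia-modeset \
             /dev/nvidia-uvm /dev/nvidia-uvm-tools \
             /dev/nvidia-caps/nvidia-cap1 /dev/nvidia-caps/nvidia-cap2; do
    if [ -e "$dev" ]; then
      echo "present: $dev"
    else
      echo "missing: $dev"
    fi
  done
}

check_nvidia_devs
```

Any path reported as `missing` will make `docker run --device <path> ...` fail with the "no such file or directory" error from this issue.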
Hi. Thanks for the help. No problem, I'm used to Nvidia. Here is what I get:

NVRM version: 545.29.06
Device Index: 0
Not sure if that was it, but I have the devices now :-) Both show up now, though I think they really appeared after a restart. There was also something about nvidia-container-toolkit versions prior to 1.12 having issues. Thanks, I will try to continue now.
Getting stuck here now:

22:03:16.577852995 INFO | Gstreamer version: 1.22.0-0

This was my docker CLI command:

```
docker run --name wolf \
  --network=host \
  -e XDG_RUNTIME_DIR=/tmp/sockets \
  -v /tmp/sockets:/tmp/sockets:rw \
  -e NVIDIA_DRIVER_VOLUME_NAME=nvidia-driver-vol \
  -v nvidia-driver-vol:/usr/nvidia:rw \
  -e HOST_APPS_STATE_FOLDER=/etc/wolf \
  -v /etc/wolf/wolf:/wolf/cfg \
  -v /var/run/docker.sock:/var/run/docker.sock:rw \
  --device-cgroup-rule "c 13:* rmw" \
  --device /dev/nvidia-uvm \
  --device /dev/nvidia-uvm-tools \
  --device /dev/dri/ \
  --device /dev/nvidia-caps/nvidia-cap1 \
  --device /dev/nvidia-caps/nvidia-cap2 \
  --device /dev/nvidiactl \
  --device /dev/nvidia0 \
  --device /dev/nvidia-modeset \
  --device /dev/uinput \
  -v /dev/shm:/dev/shm:rw \
  -v /dev/input:/dev/input:rw \
  -v /run/udev:/run/udev:rw \
  ghcr.io/games-on-whales/wolf:stable
```

I did not change anything from what was given on the Wolf website.
It's not stuck, just waiting for a Moonlight client to connect! As for the Pulse warning, it's fixed in the upcoming release; as a quick workaround, you can manually clean the
OK, still stuck; Moonlight freezes.

First, I could not run it as a daemon so that I can see the PIN link to use. Is there a way to see it without running it in a terminal?

Second, Moonlight worked and gave me the option of running some apps, then it froze with RetroArch. Here are the terminal messages:

[2024-02-07 21:10:32] [ /etc/cont-init.d/10-setup_user.sh: executing... ]
And RetroArch opens, but the graphics are too bad to be usable. So it seems the problem is still there.
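On the daemon question above: this isn't Wolf-specific, just standard Docker CLI usage (a sketch; add back the full set of `-e`/`-v`/`--device` flags from the run command earlier in the thread):

```shell
# Start Wolf detached (-d) instead of attached to the terminal.
docker run -d --name wolf --network=host ghcr.io/games-on-whales/wolf:stable

# Then read the container logs at any time; the pairing PIN link
# is printed there. -f follows the log like tail -f.
docker logs -f wolf
```

This way the container keeps running after you close the terminal, and the PIN link can be retrieved whenever a client tries to pair.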
Shouldn't the container be isolated so that it cannot do anything but stream from the host, even across multiple instances?
so that it finds the /dev/nvidia-cap1 and /dev/nvidia-cap2 devices again. Is there a way to make this more permanent? Thanks
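One possible way to make this survive reboots is a systemd oneshot unit that recreates the device nodes at boot. This is only a sketch: the unit name is made up, and it assumes `nvidia-container-cli --load-kmods info` (a real NVIDIA Container Toolkit command) recreates the missing `/dev/nvidia*` nodes on your setup as a side effect of loading the kernel modules:

```ini
# /etc/systemd/system/nvidia-device-nodes.service  (hypothetical unit name)
[Unit]
Description=Recreate NVIDIA device nodes at boot
After=systemd-modules-load.service

[Service]
Type=oneshot
# --load-kmods asks nvidia-container-cli to load the NVIDIA kernel
# modules before querying, which should create the device nodes.
ExecStart=/usr/bin/nvidia-container-cli --load-kmods info

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now nvidia-device-nodes.service`, then verify the devices exist after a reboot before starting the Wolf container.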
Hi,
I was trying to deploy Wolf, and everything seemed fine until I reached the Docker CLI / Compose instructions.
I get the following error and cannot continue:

```
docker: Error response from daemon: error gathering device information while adding custom device "/dev/nvidia-caps/nvidia-cap1": no such file or directory.
ERRO[0000] error waiting for container: context canceled
```
I have an Nvidia Quadro RTX GPU with driver version 545.29.06.
And the devices nvidia-caps do not exist on my system.
Running on Ubuntu 22.04
Kernel: Linux 5.15.0-92-lowlatency x86_64
Thanks