https://github.com/ggerganov/llama.cpp#blas-build — it seems like llama.cpp can run models on the GPU. Will LocalAI support that?
This should definitely be possible; technically it's just a matter of wiring up the compilation options. This is also discussed in #69, so closing as a duplicate.
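For reference, the linked README section covers enabling BLAS/GPU acceleration at build time. A minimal sketch, assuming the make flags llama.cpp exposed around the time of this issue (`LLAMA_OPENBLAS`, `LLAMA_CUBLAS`); check the current README for the flags your checkout supports:

```sh
# Build llama.cpp with OpenBLAS (CPU BLAS acceleration for prompt processing)
make LLAMA_OPENBLAS=1

# Or build with cuBLAS to use an NVIDIA GPU
make LLAMA_CUBLAS=1
```

Wiring LocalAI up would mean passing the equivalent options through its own build of the llama.cpp backend.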