Not working properly under CPU/RAM usage - Confirmed connection #1
Later tries go like this (screenshot omitted). Also, another error that I get: UniqueConstraintError: Collection 6d029a2e-76e8-40ad-9ad0-34b35a807bf5 already exists
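For context, this error typically comes from Chroma when `create_collection` is called with a name that already exists. A minimal sketch of the usual workaround, assuming the project uses the chromadb package (the collection name here is hypothetical):

```python
import chromadb

client = chromadb.Client()

# create_collection raises UniqueConstraintError if the name already exists;
# get_or_create_collection returns the existing collection instead.
collection = client.get_or_create_collection(name="my_documents")  # hypothetical name
```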
No, I meant to install it as a Python package in the project, without using oobabooga. You can create a Python virtual environment and use it from there, as shown in #installation. To install:
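A sketch of the install step, assuming the package in question is llama-cpp-python (the original commands were not preserved in this thread):

```bash
# create and activate a virtual environment, then install llama-cpp-python
python -m venv venv
source venv/bin/activate  # on Windows: venv\Scripts\activate
pip install llama-cpp-python
```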
To use:
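A minimal usage sketch, assuming llama-cpp-python and a local GGML model file (the model path is hypothetical):

```python
from llama_cpp import Llama

# load a GGML model from disk; runs on CPU/RAM by default
llm = Llama(model_path="./models/vicuna-13b-v1.3.0.ggmlv3.q4_0.bin")  # hypothetical path

# run a simple completion
output = llm("Q: What is llama.cpp? A:", max_tokens=64, stop=["Q:"])
print(output["choices"][0]["text"])
```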
llama.cpp is a library for running GGML models on CPU/RAM. oobabooga lets you use it, but only with compatible GGML models. The error you are getting might be a wrong model format: the model you are trying to load is, I think, a sentence-transformer model, not an LLM. For example, you can use TheBloke/vicuna-13b-v1.3.0-GGML.
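A sketch of fetching that model, assuming the huggingface_hub package is available (the exact quantization filename is an assumption; check the repo's file list):

```python
from huggingface_hub import hf_hub_download

# download one quantized GGML file from the recommended repo
model_path = hf_hub_download(
    repo_id="TheBloke/vicuna-13b-v1.3.0-GGML",
    filename="vicuna-13b-v1.3.0.ggmlv3.q4_0.bin",  # assumed filename; verify on the repo
)
print(model_path)
```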
Hi, I think we are starting to understand the repo better. You can even work without oobabooga, right? Because you can also load a llama.cpp model directly. We have already connected oobabooga to the juridia and it seems to be working (done on a low-profile PC). I guess this means that it is connected and working, right? We still have to test it on a better PC. We will keep you informed in any case. Thanks a lot for your time, appreciate it :D
Hello there Sebax
Maybe we are wrong about the usage of this app, but as far as we have seen, it should run under CPU/RAM, is that right?
Yesterday I was setting it up and running some tests; all of them failed in one way or another. Some screenshots (omitted here):
The next image showed that it had read the PDF I uploaded (screenshot omitted).
Are we doing it wrong by using it under CPU/RAM? Is it supposed to work only with a GPU?
Thanks, kind regards!