
Supporting local models for jupyter-ai with Ollama #868

Open · wants to merge 3 commits into main

Conversation

@767472021 (Author)

Add Ollama, now supporting the following local models: "gemma", "gemma2", "llama2", "llama3", "phi3", "mistral", "tinyllama", and "qwen2"!

Magic commands can also be used (learn, ask, ...).

Testing is complete and the models work well.
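For example, a notebook cell might invoke the magic like this (a hypothetical invocation; it assumes the provider is registered under the id ollama and that llama3 has been pulled locally):

%%ai ollama:llama3
Explain what a Python generator is.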


xiayu and others added 3 commits July 2, 2024 15:41
# Conflicts:
#	docs/source/users/index.md
#	packages/jupyter-ai-magics/jupyter_ai_magics/embedding_providers.py
@srdas mentioned this pull request Jul 2, 2024
@srdas (Collaborator) commented Jul 2, 2024

Cross-referencing PR #646. Related to suggestions in Issue #389.

@pedrogutobjj

How do I use this update?

@767472021 (Author) commented Jul 3, 2024

How do I use this update?

@pedrogutobjj

  1. Move to the root of the repo package:
    cd $your-jupyter-ai-top

  2. Merge this PR into your local checkout.

  3. Install all the dependencies and set up the dev environment:
    ./scripts/install.sh

  4. Start the Ollama server locally and pull a test model, e.g.:
    ollama serve
    ollama pull mistral

  5. Test.

Ref: https://jupyter-ai.readthedocs.io/en/latest/contributors/index.html
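As a quick sanity check before testing in JupyterLab, the following sketch (assuming the default Ollama port 11434) lists the models the local server has pulled:

import requests

# Ollama's /api/tags endpoint returns the models available on the local server.
resp = requests.get("http://localhost:11434/api/tags")
resp.raise_for_status()
print([m["name"] for m in resp.json().get("models", [])])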

@srdas (Collaborator) left a comment

Thanks @767472021 for this PR. I left some comments herein. Please refer to PR #646 as well. Nice work! I have also tested the code on a few Ollama models.

Collaborator

Thank you very much for adding this PR, which also complements PR #646.

[1] It would be nice to add more documentation on Ollama, directing the user to install it first (and how), and pointing to the necessary Ollama resources. Specific examples of the required Ollama CLI commands would help the user; see the note by @pedrogutobjj above, for example. Hopefully, people who use Ollama with its CLI will know what to do, but the docs should reflect the requirement that the local Ollama server be started with ollama serve.

[2] Since the number of models in Ollama is very large, the drop-down list can become unwieldy. Would it be possible to implement Ollama the same way Hugging Face is implemented, with an input box for the model in the chat settings? A similar approach would also be needed for the choice of embedding model.
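A minimal sketch of what that could look like, modeled on the Hugging Face provider pattern (the module paths and the models = ["*"] wildcard are assumptions for illustration, not code from this PR):

from jupyter_ai_magics.providers import BaseProvider
from langchain_community.llms import Ollama

class OllamaProvider(BaseProvider, Ollama):
    id = "ollama"
    name = "Ollama"
    model_id_key = "model"
    # A "*" entry registers a free-text model field in the chat settings,
    # mirroring how the Hugging Face Hub provider avoids a fixed dropdown.
    models = ["*"]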

@@ -728,6 +730,84 @@ async def _acall(self, *args, **kwargs) -> Coroutine[Any, Any, str]:
return await self._call_in_executor(*args, **kwargs)


class OllamaProvider(BaseProvider, Ollama):
Collaborator

Please see the similar implementation in PR #646, which uses a much simpler class, and check whether all the functions defined in your file from line 760 onwards are necessary, or are actually called to produce the response.

def _send_request(self, endpoint: str, data: dict) -> dict:
    """Send a POST request to the specified Ollama API endpoint."""
    url = f"{self.base_url}/{endpoint}"
    print("url is : ", url)
Collaborator

Do you need this print statement? If so, use self.log.info(), as is the approach in much of the code base.
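For instance, the print could be replaced along these lines (a sketch of the suggested change):

self.log.info("Ollama request URL: %s", url)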

@@ -18,6 +18,7 @@
Union,
)

import requests
Collaborator

If the functions in your new class can be removed, then you will not need this line.

@srdas added the enhancement (New feature or request) label Jul 3, 2024
Comment on lines +781 to +808
def _generate(self, prompt: str, model: str = "mistral", stop: list = None) -> str:
    """Generate text using the /generate endpoint."""
    data = {
        "model": model,
        "prompt": prompt,
        **({"stop": stop} if stop else {}),
    }
    response = self._send_request("api/generate", data)
    return response.get("response", "")

def _chat(self, messages: list, model: str = "mistral") -> str:
    """Chat using the /chat endpoint."""
    data = {
        "model": model,
        "messages": messages,
    }
    response = self._send_request("api/chat", data)
    return response.get("response", "")

def _call(self, prompt: str = None, messages: list = None, **kwargs) -> str:
    """Determine which API endpoint to use based on input and call it."""
    print(self.base_url)
    if prompt is not None:
        return self._generate(prompt=prompt, **kwargs)
    elif messages is not None:
        return self._chat(messages=messages, **kwargs)
    else:
        raise ValueError("Either 'prompt' or 'messages' must be provided.")
Collaborator

Is there a need to implement these Ollama class methods? The Ollama chat model already provides them, and forking the code might deviate from the official LangChain LLMs.
https://python.langchain.com/v0.1/docs/integrations/chat/ollama/
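For reference, a minimal sketch of leaning on LangChain's built-in integration instead of re-implementing the HTTP calls (it assumes langchain-community is installed and a local server was started with ollama serve):

from langchain_community.chat_models import ChatOllama

# LangChain's ChatOllama already handles the /api/chat requests internally.
llm = ChatOllama(model="mistral", base_url="http://localhost:11434")
reply = llm.invoke("Say hello in one sentence.")
print(reply.content)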

Collaborator

Agreed; I noted the same above, and I have also tested that removing these functions does not impair the functionality.

@pedrogutobjj

How long will it take to finalize the Ollama implementation in Jupyter AI and release the update to everyone?

Labels: enhancement (New feature or request)
4 participants