

Upgrades openai to version 1, removes openai history in magics (jupyterlab#551)

* Upgrades openai versions

* Migration per openai migrate

* WIP: Upgrades openai, merges in "new" provider

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* No longer suppress openai-chat

* Removes "new" openai-chat provider from commands.ipynb

* Suppress warning re custom exception handler

* Changes param name to prefix_messages

* Renames openai-chat-new model in tests

* Removes hard dependency from jupyter-ai on openai, updates magics TOML

* Removes "reset" option, copy edits in docs, updates sample file

* Removes append_exchange function

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
JasonWeill and pre-commit-ci[bot] committed Jan 5, 2024
1 parent 74c0b18 commit 83d14ca
Showing 12 changed files with 30 additions and 183 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -31,7 +31,7 @@ In addition, you will need access to at least one model provider.

To use any AI model provider within this notebook, you'll need the appropriate credentials, such as API keys.

Obtain the necessary credentials (e.g., API keys) from your model provider's platform.
Obtain the necessary credentials, such as API keys, from your model provider's platform.

You can set your keys using environment variables or in a code cell in your notebook.
In a code cell, you can use the %env magic command to set the credentials as follows:
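The snippet the README refers to is elided by the diff view; as a sketch (the key value below is a placeholder, not a real credential), the `%env` magic and its plain-Python equivalent both set the variable for the kernel process:

```python
import os

# Plain-Python equivalent of running `%env OPENAI_API_KEY=...` in a
# notebook cell; the value is a placeholder, not a real credential.
os.environ["OPENAI_API_KEY"] = "sk-placeholder"

print(os.environ["OPENAI_API_KEY"])
```

Either form only affects the current kernel; restarting the kernel clears it.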
26 changes: 3 additions & 23 deletions docs/source/users/index.md
@@ -259,7 +259,7 @@ should contain the following data:
AWS Console at the URL
`https://<region>.console.aws.amazon.com/sagemaker/home?region=<region>#/endpoints`.

- **Region name**: The AWS region your SageMaker endpoint is hosted in, e.g. `us-west-2`.
- **Region name**: The AWS region your SageMaker endpoint is hosted in, such as `us-west-2`.

- **Request schema**: The JSON object the endpoint expects, with the prompt
being substituted into any value that matches the string literal `"<prompt>"`.
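A minimal sketch of that substitution rule — the schema shape here is illustrative, not the one any particular endpoint expects:

```python
import json

# Illustrative request schema: every value equal to the string literal
# "<prompt>" is replaced by the user's prompt before the request is sent.
schema = {"inputs": "<prompt>", "parameters": {"max_new_tokens": 256}}

def substitute_prompt(obj, prompt):
    """Recursively replace the string literal "<prompt>" anywhere in obj."""
    if obj == "<prompt>":
        return prompt
    if isinstance(obj, dict):
        return {k: substitute_prompt(v, prompt) for k, v in obj.items()}
    if isinstance(obj, list):
        return [substitute_prompt(v, prompt) for v in obj]
    return obj

body = substitute_prompt(schema, "What is 2 + 2?")
print(json.dumps(body))
```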
@@ -473,8 +473,8 @@ running the following code in a notebook cell or IPython shell:
This command should not produce any output.

:::{note}
If you are using remote kernels (e.g. Amazon SageMaker Studio), the above
command will throw an error. This means that need to install the magics package
If you are using remote kernels, such as in Amazon SageMaker Studio, the above
command will throw an error. You will need to install the magics package
on your remote kernel separately, even if you already have `jupyter_ai_magics`
installed in your server's environment. In a notebook, run

@@ -519,11 +519,6 @@ We currently support the following language model providers:
- `openai-chat`
- `sagemaker-endpoint`

:::{warning}
As of v0.8.0, only the `%%ai` *cell* magic may be used to invoke a language
model, while the `%ai` *line* magic is reserved for invoking subcommands.
:::

### Listing available models

Jupyter AI also includes multiple subcommands, which may be invoked via the
@@ -604,21 +599,6 @@ A function that computes the lowest common multiples of two integers, and
a function that runs 5 test cases of the lowest common multiple function
```

### Clearing the OpenAI chat history

With the `openai-chat` provider *only*, you can run a cell magic command using the `-r` or
`--reset` option to clear the chat history. After you do this, previous magic commands you've
run with the `openai-chat` provider will no longer be added as context in
requests to this provider.

Because the `%%ai` command is a cell magic, you must provide a prompt on the second line.
This prompt will not be sent to the provider. A reset command will not generate any output.

```
%%ai openai-chat:gpt-3.5-turbo -r
reset the chat history
```

### Interpolating in prompts

Using curly brace syntax, you can include variables and other Python expressions in your
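The curly-brace interpolation described here can be sketched in plain Python. This is a simplification of what the magic actually does (it evaluates expressions against the notebook namespace); the names and helper below are illustrative:

```python
import re

# Simplified sketch of curly-brace interpolation: each {expr} in the
# prompt is replaced by the result of evaluating expr against the
# notebook's variables before the prompt is sent to the model.
namespace = {"poet": "Walt Whitman", "n": 3}

def interpolate(prompt: str, ns: dict) -> str:
    return re.sub(
        r"\{([^{}]+)\}",
        lambda m: str(eval(m.group(1), {}, ns)),
        prompt,
    )

print(interpolate("Write {n} lines in the style of {poet}.", namespace))
```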
69 changes: 5 additions & 64 deletions examples/commands.ipynb

Large diffs are not rendered by default.

1 change: 0 additions & 1 deletion packages/jupyter-ai-magics/jupyter_ai_magics/__init__.py
@@ -22,7 +22,6 @@
BedrockChatProvider,
BedrockProvider,
ChatAnthropicProvider,
ChatOpenAINewProvider,
ChatOpenAIProvider,
CohereProvider,
GPT4AllProvider,
35 changes: 9 additions & 26 deletions packages/jupyter-ai-magics/jupyter_ai_magics/magics.py
@@ -124,20 +124,22 @@ def __init__(self, shell):
super().__init__(shell)
self.transcript_openai = []

# suppress warning when using old OpenAIChat provider
warnings.filterwarnings(
"ignore",
message="You are trying to use a chat model. This way of initializing it is "
"no longer supported. Instead, please use: "
"`from langchain.chat_models import ChatOpenAI`",
)
# suppress warning when using old Anthropic provider
warnings.filterwarnings(
"ignore",
message="This Anthropic LLM is deprecated. Please use "
"`from langchain.chat_models import ChatAnthropic` instead",
)

# suppress warning about our exception handler
warnings.filterwarnings(
"ignore",
message="IPython detected, but you already "
"have a custom exception handler installed. I'll skip installing "
"Trio's custom handler, but this means exception groups will not "
"show full tracebacks.",
)

self.providers = get_lm_providers()

# initialize a registry of custom model/chain names
@@ -408,17 +410,10 @@ def handle_error(self, args: ErrorArgs):
# Set CellArgs based on ErrorArgs
values = args.dict()
values["type"] = "root"
values["reset"] = False
cell_args = CellArgs(**values)

return self.run_ai_cell(cell_args, prompt)

def _append_exchange_openai(self, prompt: str, output: str):
"""Appends a conversational exchange between user and an OpenAI Chat
model to a transcript that will be included in future exchanges."""
self.transcript_openai.append({"role": "user", "content": prompt})
self.transcript_openai.append({"role": "assistant", "content": output})

def _decompose_model_id(self, model_id: str):
"""Breaks down a model ID into a two-tuple (provider_id, local_model_id). Returns (None, None) if indeterminate."""
# custom_model_registry maps keys to either a model name (a string) or an LLMChain.
@@ -503,11 +498,6 @@ def run_ai_cell(self, args: CellArgs, prompt: str):
+ "If you were trying to run a command, run `%ai help` to see a list of commands.",
)

# if `--reset` is specified, reset transcript and return early
if provider_id == "openai-chat" and args.reset:
self.transcript_openai = []
return

# validate presence of authn credentials
auth_strategy = self.providers[provider_id].auth_strategy
if auth_strategy:
@@ -530,8 +520,6 @@ def run_ai_cell(self, args: CellArgs, prompt: str):

# configure and instantiate provider
provider_params = {"model_id": local_model_id}
if provider_id == "openai-chat":
provider_params["prefix_messages"] = self.transcript_openai
# for SageMaker, validate that required params are specified
if provider_id == "sagemaker-endpoint":
if (
@@ -565,11 +553,6 @@ def run_ai_cell(self, args: CellArgs, prompt: str):
result = provider.generate([prompt])

output = result.generations[0][0].text

# if openai-chat, append exchange to transcript
if provider_id == "openai-chat":
self._append_exchange_openai(prompt, output)

md = {"jupyter_ai": {"provider_id": provider_id, "model_id": local_model_id}}

return self.display_output(output, args.format, md)
10 changes: 1 addition & 9 deletions packages/jupyter-ai-magics/jupyter_ai_magics/parsers.py
@@ -46,15 +46,14 @@ class CellArgs(BaseModel):
type: Literal["root"] = "root"
model_id: str
format: FORMAT_CHOICES_TYPE
reset: bool
model_parameters: Optional[str]
# The following parameters are required only for SageMaker models
region_name: Optional[str]
request_schema: Optional[str]
response_path: Optional[str]


# Should match CellArgs, but without "reset"
# Should match CellArgs
class ErrorArgs(BaseModel):
type: Literal["error"] = "error"
model_id: str
@@ -126,13 +125,6 @@ def verify_json_value(ctx, param, value):
default="markdown",
help=FORMAT_HELP,
)
@click.option(
"-r",
"--reset",
is_flag=True,
help="""Clears the conversation transcript used when interacting with an
OpenAI chat model provider. Does nothing with other providers.""",
)
@click.option(
REGION_NAME_SHORT_OPTION,
REGION_NAME_LONG_OPTION,
50 changes: 4 additions & 46 deletions packages/jupyter-ai-magics/jupyter_ai_magics/providers.py
@@ -22,7 +22,6 @@
AzureChatOpenAI,
BedrockChat,
ChatAnthropic,
ChatOpenAI,
QianfanChatEndpoint,
)
from langchain.chat_models.base import BaseChatModel
@@ -44,6 +43,7 @@
from langchain.pydantic_v1 import BaseModel, Extra, root_validator
from langchain.schema import LLMResult
from langchain.utils import get_from_dict_or_env
from langchain_community.chat_models import ChatOpenAI


class EnvAuthStrategy(BaseModel):
@@ -534,13 +534,13 @@ def is_api_key_exc(cls, e: Exception):
"""
import openai

if isinstance(e, openai.error.AuthenticationError):
if isinstance(e, openai.AuthenticationError):
error_details = e.json_body.get("error", {})
return error_details.get("code") == "invalid_api_key"
return False
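The shape of this check can be exercised without a live client. Here a stub stands in for the v1 `openai.AuthenticationError` (in openai>=1.x the exception classes moved from `openai.error.*` to the top-level `openai` namespace); the stub and its `json_body` payload are illustrative:

```python
# Stub standing in for openai.AuthenticationError, so the detection
# logic can be tested without the openai package installed.
class AuthenticationError(Exception):
    def __init__(self, json_body):
        super().__init__("authentication failed")
        self.json_body = json_body

def is_api_key_exc(e: Exception) -> bool:
    """Mirror of the provider check: True only for invalid-API-key errors."""
    if isinstance(e, AuthenticationError):
        error_details = e.json_body.get("error", {})
        return error_details.get("code") == "invalid_api_key"
    return False

print(is_api_key_exc(AuthenticationError({"error": {"code": "invalid_api_key"}})))
print(is_api_key_exc(ValueError("unrelated")))
```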


class ChatOpenAIProvider(BaseProvider, OpenAIChat):
class ChatOpenAIProvider(BaseProvider, ChatOpenAI):
id = "openai-chat"
name = "OpenAI"
models = [
@@ -561,48 +561,6 @@ class ChatOpenAIProvider(BaseProvider, OpenAIChat):
pypi_package_deps = ["openai"]
auth_strategy = EnvAuthStrategy(name="OPENAI_API_KEY")

def append_exchange(self, prompt: str, output: str):
"""Appends a conversational exchange between user and an OpenAI Chat
model to a transcript that will be included in future exchanges."""
self.prefix_messages.append({"role": "user", "content": prompt})
self.prefix_messages.append({"role": "assistant", "content": output})

@classmethod
def is_api_key_exc(cls, e: Exception):
"""
Determine if the exception is an OpenAI API key error.
"""
import openai

if isinstance(e, openai.error.AuthenticationError):
error_details = e.json_body.get("error", {})
return error_details.get("code") == "invalid_api_key"
return False


# uses the new OpenAIChat provider. temporarily living as a separate class until
# conflicts can be resolved
class ChatOpenAINewProvider(BaseProvider, ChatOpenAI):
id = "openai-chat-new"
name = "OpenAI"
models = [
"gpt-3.5-turbo",
"gpt-3.5-turbo-16k",
"gpt-3.5-turbo-0301",
"gpt-3.5-turbo-0613",
"gpt-3.5-turbo-16k-0613",
"gpt-4",
"gpt-4-0314",
"gpt-4-0613",
"gpt-4-32k",
"gpt-4-32k-0314",
"gpt-4-32k-0613",
"gpt-4-1106-preview",
]
model_id_key = "model_name"
pypi_package_deps = ["openai"]
auth_strategy = EnvAuthStrategy(name="OPENAI_API_KEY")

fields = [
TextField(
key="openai_api_base", label="Base API URL (optional)", format="text"
@@ -620,7 +578,7 @@ def is_api_key_exc(cls, e: Exception):
"""
import openai

if isinstance(e, openai.error.AuthenticationError):
if isinstance(e, openai.AuthenticationError):
error_details = e.json_body.get("error", {})
return error_details.get("code") == "invalid_api_key"
return False
3 changes: 1 addition & 2 deletions packages/jupyter-ai-magics/pyproject.toml
@@ -50,7 +50,7 @@ all = [
"huggingface_hub",
"ipywidgets",
"pillow",
"openai",
"openai~=1.6.1",
"boto3",
"qianfan"
]
@@ -63,7 +63,6 @@ gpt4all = "jupyter_ai_magics:GPT4AllProvider"
huggingface_hub = "jupyter_ai_magics:HfHubProvider"
openai = "jupyter_ai_magics:OpenAIProvider"
openai-chat = "jupyter_ai_magics:ChatOpenAIProvider"
openai-chat-new = "jupyter_ai_magics:ChatOpenAINewProvider"
azure-chat-openai = "jupyter_ai_magics:AzureChatOpenAIProvider"
sagemaker-endpoint = "jupyter_ai_magics:SmEndpointProvider"
amazon-bedrock = "jupyter_ai_magics:BedrockProvider"
4 changes: 0 additions & 4 deletions packages/jupyter-ai/jupyter_ai/handlers.py
@@ -297,10 +297,6 @@ def get(self):

# Step 1: gather providers
for provider in self.lm_providers.values():
# skip old legacy OpenAI chat provider used only in magics
if provider.id == "openai-chat":
continue

optionals = {}
if provider.model_id_label:
optionals["model_id_label"] = provider.model_id_label
8 changes: 4 additions & 4 deletions packages/jupyter-ai/jupyter_ai/tests/test_config_manager.py
@@ -127,7 +127,7 @@ def configure_to_cohere(cm: ConfigManager):
def configure_to_openai(cm: ConfigManager):
"""Configures the ConfigManager to use OpenAI language and embedding models
with the API key set. Returns a 3-tuple of the keyword arguments used."""
LM_GID = "openai-chat-new:gpt-3.5-turbo"
LM_GID = "openai-chat:gpt-3.5-turbo"
EM_GID = "openai:text-embedding-ada-002"
API_KEYS = {"OPENAI_API_KEY": "foobar"}
LM_LID = "gpt-3.5-turbo"
@@ -157,7 +157,7 @@ def test_init_with_blocklists(cm: ConfigManager, common_cm_kwargs):
del cm

blocked_providers = ["openai"] # blocks EM
blocked_models = ["openai-chat-new:gpt-3.5-turbo"] # blocks LM
blocked_models = ["openai-chat:gpt-3.5-turbo"] # blocks LM
kwargs = {
**common_cm_kwargs,
"blocked_providers": blocked_providers,
@@ -278,14 +278,14 @@ def test_forbid_write_write_conflict(cm: ConfigManager):

# call UpdateConfig separately after DescribeConfig with `last_read` unset
# to force a write
cm.update_config(UpdateConfigRequest(model_provider_id="openai-chat-new:gpt-4"))
cm.update_config(UpdateConfigRequest(model_provider_id="openai-chat:gpt-4"))

# this update should fail, as this generates a write-write conflict (where
# the second update clobbers the first update).
with pytest.raises(WriteConflictError):
cm.update_config(
UpdateConfigRequest(
model_provider_id="openai-chat-new:gpt-4-32k", last_read=last_read
model_provider_id="openai-chat:gpt-4-32k", last_read=last_read
)
)

1 change: 0 additions & 1 deletion packages/jupyter-ai/pyproject.toml
@@ -24,7 +24,6 @@ classifiers = [
dependencies = [
"jupyter_server>=1.6,<3",
"jupyterlab~=4.0",
"openai~=0.26",
"aiosqlite>=0.18",
"importlib_metadata>=5.2.0",
"langchain==0.0.350",
4 changes: 2 additions & 2 deletions yarn.lock
@@ -15070,11 +15070,11 @@ __metadata:

"typescript@patch:typescript@^3 || ^4#~builtin<compat/typescript>":
version: 4.9.5
resolution: "typescript@patch:typescript@npm%3A4.9.5#~builtin<compat/typescript>::version=4.9.5&hash=289587"
resolution: "typescript@patch:typescript@npm%3A4.9.5#~builtin<compat/typescript>::version=4.9.5&hash=23ec76"
bin:
tsc: bin/tsc
tsserver: bin/tsserver
checksum: 1f8f3b6aaea19f0f67cba79057674ba580438a7db55057eb89cc06950483c5d632115c14077f6663ea76fd09fce3c190e6414bb98582ec80aa5a4eaf345d5b68
checksum: ab417a2f398380c90a6cf5a5f74badd17866adf57f1165617d6a551f059c3ba0a3e4da0d147b3ac5681db9ac76a303c5876394b13b3de75fdd5b1eaa06181c9d
languageName: node
linkType: hard

