diff --git a/README.md b/README.md
index 5943bb23e..f390aa280 100644
--- a/README.md
+++ b/README.md
@@ -7,10 +7,10 @@ and powerful way to explore generative AI models in notebooks and improve your p
in JupyterLab and the Jupyter Notebook. More specifically, Jupyter AI offers:
* An `%%ai` magic that turns the Jupyter notebook into a reproducible generative AI playground.
- This works anywhere the IPython kernel runs (JupyterLab, Jupyter Notebook, Google Colab, VSCode, etc.).
+ This works anywhere the IPython kernel runs (JupyterLab, Jupyter Notebook, Google Colab, Kaggle, VSCode, etc.).
* A native chat UI in JupyterLab that enables you to work with generative AI as a conversational assistant.
* Support for a wide range of generative model providers, including AI21, Anthropic, AWS, Cohere,
- Hugging Face, and OpenAI.
+ Hugging Face, NVIDIA, and OpenAI.
* Local model support through GPT4All, enabling use of generative AI models on consumer-grade machines
  with ease and privacy.
@@ -54,20 +54,21 @@ for details on installing and using Jupyter AI.
If you want to install both the `%%ai` magic and the JupyterLab extension, you can run:
- $ pip install jupyter_ai
+ $ pip install jupyter-ai
If you are not using JupyterLab and you only want to install the Jupyter AI `%%ai` magic, you can run:
- $ pip install jupyter_ai_magics
+ $ pip install jupyter-ai-magics
### With conda
As an alternative to using `pip`, you can install `jupyter-ai` using
[Conda](https://conda.io/projects/conda/en/latest/user-guide/install/index.html)
-from the `conda-forge` channel:
+from the `conda-forge` channel, using one of the following two commands:
- $ conda install -c conda-forge jupyter_ai
+ $ conda install -c conda-forge jupyter-ai # or,
+ $ conda install conda-forge::jupyter-ai
## The `%%ai` magic command
diff --git a/docs/source/index.md b/docs/source/index.md
index 28d094e93..8d70538e3 100644
--- a/docs/source/index.md
+++ b/docs/source/index.md
@@ -8,7 +8,7 @@ in JupyterLab and the Jupyter Notebook. More specifically, Jupyter AI offers:
This works anywhere the IPython kernel runs (JupyterLab, Jupyter Notebook, Google Colab, VSCode, etc.).
* A native chat UI in JupyterLab that enables you to work with generative AI as a conversational assistant.
* Support for a wide range of generative model providers and models
- (AI21, Anthropic, Cohere, Hugging Face, OpenAI, SageMaker, etc.).
+ (AI21, Anthropic, Cohere, Hugging Face, OpenAI, SageMaker, NVIDIA, etc.).
[AI Foundation Models](https://catalog.ngc.nvidia.com/ai-foundation-models), and select a model with an API endpoint. Click "API" on the model's detail page, and click "Generate Key". Save this key, and set it as the environment variable `NVIDIA_API_KEY` to access any of the model endpoints.
+
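Once generated, the key can be exported in your shell profile or set for the current Python process before the extension loads. A minimal sketch (the `"nvapi-xxxxxxxx"` value is a hypothetical placeholder, not a real key):

```python
import os

# Make the NVIDIA key visible to Jupyter AI in this process.
# "nvapi-xxxxxxxx" is a hypothetical placeholder; paste the key you generated.
os.environ["NVIDIA_API_KEY"] = "nvapi-xxxxxxxx"

assert "NVIDIA_API_KEY" in os.environ
```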
SageMaker endpoint names are created when you deploy a model. For more information, see
["Create your endpoint and deploy your model"](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-deployment.html)
in the SageMaker documentation.
@@ -515,6 +527,7 @@ We currently support the following language model providers:
- `bedrock-chat`
- `cohere`
- `huggingface_hub`
+- `nvidia-chat`
- `openai`
- `openai-chat`
- `sagemaker-endpoint`
@@ -765,6 +778,28 @@ The `--response-path` option is a [JSONPath](https://goessner.net/articles/JsonP
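As a minimal illustration of what such a path selects, here is a toy walker in plain Python. This is not the extension's implementation (Jupyter AI uses a full JSONPath library), and the sample response shape is hypothetical; the toy handles only dotted names and single `[i]` indices:

```python
import json

def extract(response: dict, path: str):
    """Toy JSONPath walker: handles only dotted names and single [i] indices."""
    value = response
    for part in path.lstrip("$.").split("."):
        if part.endswith("]"):                 # e.g. "generated_texts[0]"
            name, index = part[:-1].split("[")
            value = value[name][int(index)]
        else:
            value = value[part]
    return value

# Hypothetical endpoint response; "$.generated_texts[0]" picks the first text.
sample = json.loads('{"generated_texts": ["Hello from the endpoint"]}')
print(extract(sample, "$.generated_texts[0]"))  # Hello from the endpoint
```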
You can specify an allowlist, to allow only a certain list of providers, or
a blocklist, to block some providers.
+### Configuring default models and API keys
+
+This configuration lets you set default language and embedding models, along with their corresponding API keys.
+These values are offered as a starting point, so users don't have to select models and API keys themselves; however,
+any selections they make in the settings panel take precedence over these defaults.
+
+Specify a default language model:
+```bash
+jupyter lab --AiExtension.default_language_model=bedrock-chat:anthropic.claude-v2
+```
+
+Specify a default embedding model:
+```bash
+jupyter lab --AiExtension.default_embeddings_model=bedrock:amazon.titan-embed-text-v1
+```
+
+Specify default API keys:
+```bash
+jupyter lab --AiExtension.default_api_keys="{'OPENAI_API_KEY': 'sk-abcd'}"
+```
+
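The same defaults can also be set in a traitlets config file rather than on the command line. A sketch, assuming a standard Jupyter config file such as `jupyter_lab_config.py` (adjust file location and values to your deployment):

```python
# jupyter_lab_config.py (sketch) -- same settings expressed as traitlets config.
c.AiExtension.default_language_model = "bedrock-chat:anthropic.claude-v2"
c.AiExtension.default_embeddings_model = "bedrock:amazon.titan-embed-text-v1"
c.AiExtension.default_api_keys = {"OPENAI_API_KEY": "sk-abcd"}
```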
### Blocklisting providers
This configuration allows for blocking specific providers in the settings panel.
diff --git a/examples/commands.ipynb b/examples/commands.ipynb
index 6990df577..9d4d12b2c 100644
--- a/examples/commands.ipynb
+++ b/examples/commands.ipynb
@@ -136,15 +136,18 @@
"text/markdown": [
"| Provider | Environment variable | Set? | Models |\n",
"|----------|----------------------|------|--------|\n",
- "| `ai21` | `AI21_API_KEY` | ✅ | `ai21:j1-large`, `ai21:j1-grande`, `ai21:j1-jumbo`, `ai21:j1-grande-instruct`, `ai21:j2-large`, `ai21:j2-grande`, `ai21:j2-jumbo`, `ai21:j2-grande-instruct`, `ai21:j2-jumbo-instruct` |\n",
- "| `bedrock` | Not applicable. | N/A | `bedrock:amazon.titan-tg1-large`, `bedrock:anthropic.claude-v1`, `bedrock:anthropic.claude-instant-v1`, `bedrock:anthropic.claude-v2`, `bedrock:ai21.j2-jumbo-instruct`, `bedrock:ai21.j2-grande-instruct` |\n",
- "| `anthropic` | `ANTHROPIC_API_KEY` | ✅ | `anthropic:claude-v1`, `anthropic:claude-v1.0`, `anthropic:claude-v1.2`, `anthropic:claude-2`, `anthropic:claude-instant-v1`, `anthropic:claude-instant-v1.0` |\n",
+ "| `ai21` | `AI21_API_KEY` | ✅ | - `ai21:j1-large`<br>- `ai21:j1-grande`<br>- `ai21:j1-jumbo`<br>- `ai21:j1-grande-instruct`<br>- `ai21:j2-large`<br>- `ai21:j2-grande`<br>- `ai21:j2-jumbo`<br>- `ai21:j2-grande-instruct`<br>- `ai21:j2-jumbo-instruct` |\n",
+ "| `bedrock` | Not applicable. | N/A | - `bedrock:amazon.titan-text-express-v1`<br>- `bedrock:ai21.j2-ultra-v1`<br>- `bedrock:ai21.j2-mid-v1`<br>- `bedrock:cohere.command-light-text-v14`<br>- `bedrock:cohere.command-text-v14`<br>- `bedrock:meta.llama2-13b-chat-v1`<br>- `bedrock:meta.llama2-70b-chat-v1` |\n",
+ "| `bedrock-chat` | Not applicable. | N/A | - `bedrock-chat:anthropic.claude-v1`<br>- `bedrock-chat:anthropic.claude-v2`<br>- `bedrock-chat:anthropic.claude-v2:1`<br>- `bedrock-chat:anthropic.claude-instant-v1` |\n",
+ "| `anthropic` | `ANTHROPIC_API_KEY` | ✅ | - `anthropic:claude-v1`<br>- `anthropic:claude-v1.0`<br>- `anthropic:claude-v1.2`<br>- `anthropic:claude-2`<br>- `anthropic:claude-2.0`<br>- `anthropic:claude-instant-v1`<br>- `anthropic:claude-instant-v1.0`<br>- `anthropic:claude-instant-v1.2` |\n",
+ "| `anthropic-chat` | `ANTHROPIC_API_KEY` | ✅ | - `anthropic-chat:claude-v1`<br>- `anthropic-chat:claude-v1.0`<br>- `anthropic-chat:claude-v1.2`<br>- `anthropic-chat:claude-2`<br>- `anthropic-chat:claude-2.0`<br>- `anthropic-chat:claude-instant-v1`<br>- `anthropic-chat:claude-instant-v1.0`<br>- `anthropic-chat:claude-instant-v1.2` |\n",
"| `azure-chat-openai` | `OPENAI_API_KEY` | ✅ | This provider does not define a list of models. |\n",
- "| `cohere` | `COHERE_API_KEY` | ✅ | `cohere:medium`, `cohere:xlarge` |\n",
- "| `gpt4all` | Not applicable. | N/A | `gpt4all:ggml-gpt4all-j-v1.2-jazzy`, `gpt4all:ggml-gpt4all-j-v1.3-groovy`, `gpt4all:ggml-gpt4all-l13b-snoozy` |\n",
+ "| `cohere` | `COHERE_API_KEY` | ✅ | - `cohere:command`<br>- `cohere:command-nightly`<br>- `cohere:command-light`<br>- `cohere:command-light-nightly` |\n",
+ "| `gpt4all` | Not applicable. | N/A | - `gpt4all:ggml-gpt4all-j-v1.2-jazzy`<br>- `gpt4all:ggml-gpt4all-j-v1.3-groovy`<br>- `gpt4all:ggml-gpt4all-l13b-snoozy`<br>- `gpt4all:mistral-7b-openorca.Q4_0`<br>- `gpt4all:mistral-7b-instruct-v0.1.Q4_0`<br>- `gpt4all:gpt4all-falcon-q4_0`<br>- `gpt4all:wizardlm-13b-v1.2.Q4_0`<br>- `gpt4all:nous-hermes-llama2-13b.Q4_0`<br>- `gpt4all:gpt4all-13b-snoozy-q4_0`<br>- `gpt4all:mpt-7b-chat-merges-q4_0`<br>- `gpt4all:orca-mini-3b-gguf2-q4_0`<br>- `gpt4all:starcoder-q4_0`<br>- `gpt4all:rift-coder-v0-7b-q4_0`<br>- `gpt4all:em_german_mistral_v01.Q4_0` |\n",
"| `huggingface_hub` | `HUGGINGFACEHUB_API_TOKEN` | ✅ | See [https://huggingface.co/models](https://huggingface.co/models) for a list of models. Pass a model's repository ID as the model ID; for example, `huggingface_hub:ExampleOwner/example-model`. |\n",
- "| `openai` | `OPENAI_API_KEY` | ✅ | `openai:text-davinci-003`, `openai:text-davinci-002`, `openai:text-curie-001`, `openai:text-babbage-001`, `openai:text-ada-001`, `openai:davinci`, `openai:curie`, `openai:babbage`, `openai:ada` |\n",
- "| `openai-chat` | `OPENAI_API_KEY` | ✅ | `openai-chat:gpt-3.5-turbo`, `openai-chat:gpt-3.5-turbo-16k`, `openai-chat:gpt-3.5-turbo-0301`, `openai-chat:gpt-3.5-turbo-0613`, `openai-chat:gpt-3.5-turbo-16k-0613`, `openai-chat:gpt-4`, `openai-chat:gpt-4-0314`, `openai-chat:gpt-4-0613`, `openai-chat:gpt-4-32k`, `openai-chat:gpt-4-32k-0314`, `openai-chat:gpt-4-32k-0613` |\n",
+ "| `openai` | `OPENAI_API_KEY` | ✅ | - `openai:babbage-002`<br>- `openai:davinci-002`<br>- `openai:gpt-3.5-turbo-instruct` |\n",
+ "| `openai-chat` | `OPENAI_API_KEY` | ✅ | - `openai-chat:gpt-3.5-turbo`<br>- `openai-chat:gpt-3.5-turbo-0301`<br>- `openai-chat:gpt-3.5-turbo-0613`<br>- `openai-chat:gpt-3.5-turbo-1106`<br>- `openai-chat:gpt-3.5-turbo-16k`<br>- `openai-chat:gpt-3.5-turbo-16k-0613`<br>- `openai-chat:gpt-4`<br>- `openai-chat:gpt-4-0613`<br>- `openai-chat:gpt-4-32k`<br>- `openai-chat:gpt-4-32k-0613`<br>- `openai-chat:gpt-4-1106-preview` |\n",
+ "| `qianfan` | `QIANFAN_AK`, `QIANFAN_SK` | ❌ | - `qianfan:ERNIE-Bot`<br>- `qianfan:ERNIE-Bot-4` |\n",
"| `sagemaker-endpoint` | Not applicable. | N/A | Specify an endpoint name as the model ID. In addition, you must specify a region name, request schema, and response path. For more information, see the documentation about [SageMaker endpoints deployment](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-deployment.html) and about [using magic commands with SageMaker endpoints](https://jupyter-ai.readthedocs.io/en/latest/users/index.html#using-magic-commands-with-sagemaker-endpoints). |\n",
"\n",
"Aliases and custom commands:\n",
@@ -152,13 +155,16 @@
"| Name | Target |\n",
"|------|--------|\n",
"| `gpt2` | `huggingface_hub:gpt2` |\n",
- "| `gpt3` | `openai:text-davinci-003` |\n",
+ "| `gpt3` | `openai:davinci-002` |\n",
"| `chatgpt` | `openai-chat:gpt-3.5-turbo` |\n",
- "| `gpt4` | `openai-chat:gpt-4` |\n"
+ "| `gpt4` | `openai-chat:gpt-4` |\n",
+ "| `ernie-bot` | `qianfan:ERNIE-Bot` |\n",
+ "| `ernie-bot-4` | `qianfan:ERNIE-Bot-4` |\n",
+ "| `titan` | `bedrock:amazon.titan-tg1-large` |\n"
],
"text/plain": [
"ai21\n",
- "Requires environment variable AI21_API_KEY (set)\n",
+ "Requires environment variable: AI21_API_KEY (set)\n",
"* ai21:j1-large\n",
"* ai21:j1-grande\n",
"* ai21:j1-jumbo\n",
@@ -170,65 +176,97 @@
"* ai21:j2-jumbo-instruct\n",
"\n",
"bedrock\n",
- "* bedrock:amazon.titan-tg1-large\n",
- "* bedrock:anthropic.claude-v1\n",
- "* bedrock:anthropic.claude-instant-v1\n",
- "* bedrock:anthropic.claude-v2\n",
- "* bedrock:ai21.j2-jumbo-instruct\n",
- "* bedrock:ai21.j2-grande-instruct\n",
+ "* bedrock:amazon.titan-text-express-v1\n",
+ "* bedrock:ai21.j2-ultra-v1\n",
+ "* bedrock:ai21.j2-mid-v1\n",
+ "* bedrock:cohere.command-light-text-v14\n",
+ "* bedrock:cohere.command-text-v14\n",
+ "* bedrock:meta.llama2-13b-chat-v1\n",
+ "* bedrock:meta.llama2-70b-chat-v1\n",
+ "\n",
+ "bedrock-chat\n",
+ "* bedrock-chat:anthropic.claude-v1\n",
+ "* bedrock-chat:anthropic.claude-v2\n",
+ "* bedrock-chat:anthropic.claude-v2:1\n",
+ "* bedrock-chat:anthropic.claude-instant-v1\n",
"\n",
"anthropic\n",
- "Requires environment variable ANTHROPIC_API_KEY (set)\n",
+ "Requires environment variable: ANTHROPIC_API_KEY (set)\n",
"* anthropic:claude-v1\n",
"* anthropic:claude-v1.0\n",
"* anthropic:claude-v1.2\n",
"* anthropic:claude-2\n",
+ "* anthropic:claude-2.0\n",
"* anthropic:claude-instant-v1\n",
"* anthropic:claude-instant-v1.0\n",
+ "* anthropic:claude-instant-v1.2\n",
+ "\n",
+ "anthropic-chat\n",
+ "Requires environment variable: ANTHROPIC_API_KEY (set)\n",
+ "* anthropic-chat:claude-v1\n",
+ "* anthropic-chat:claude-v1.0\n",
+ "* anthropic-chat:claude-v1.2\n",
+ "* anthropic-chat:claude-2\n",
+ "* anthropic-chat:claude-2.0\n",
+ "* anthropic-chat:claude-instant-v1\n",
+ "* anthropic-chat:claude-instant-v1.0\n",
+ "* anthropic-chat:claude-instant-v1.2\n",
"\n",
"azure-chat-openai\n",
- "Requires environment variable OPENAI_API_KEY (set)\n",
+ "Requires environment variable: OPENAI_API_KEY (set)\n",
"* This provider does not define a list of models.\n",
"\n",
"cohere\n",
- "Requires environment variable COHERE_API_KEY (set)\n",
- "* cohere:medium\n",
- "* cohere:xlarge\n",
+ "Requires environment variable: COHERE_API_KEY (set)\n",
+ "* cohere:command\n",
+ "* cohere:command-nightly\n",
+ "* cohere:command-light\n",
+ "* cohere:command-light-nightly\n",
"\n",
"gpt4all\n",
"* gpt4all:ggml-gpt4all-j-v1.2-jazzy\n",
"* gpt4all:ggml-gpt4all-j-v1.3-groovy\n",
"* gpt4all:ggml-gpt4all-l13b-snoozy\n",
+ "* gpt4all:mistral-7b-openorca.Q4_0\n",
+ "* gpt4all:mistral-7b-instruct-v0.1.Q4_0\n",
+ "* gpt4all:gpt4all-falcon-q4_0\n",
+ "* gpt4all:wizardlm-13b-v1.2.Q4_0\n",
+ "* gpt4all:nous-hermes-llama2-13b.Q4_0\n",
+ "* gpt4all:gpt4all-13b-snoozy-q4_0\n",
+ "* gpt4all:mpt-7b-chat-merges-q4_0\n",
+ "* gpt4all:orca-mini-3b-gguf2-q4_0\n",
+ "* gpt4all:starcoder-q4_0\n",
+ "* gpt4all:rift-coder-v0-7b-q4_0\n",
+ "* gpt4all:em_german_mistral_v01.Q4_0\n",
"\n",
"huggingface_hub\n",
- "Requires environment variable HUGGINGFACEHUB_API_TOKEN (set)\n",
+ "Requires environment variable: HUGGINGFACEHUB_API_TOKEN (set)\n",
"* See [https://huggingface.co/models](https://huggingface.co/models) for a list of models. Pass a model's repository ID as the model ID; for example, `huggingface_hub:ExampleOwner/example-model`.\n",
"\n",
"openai\n",
- "Requires environment variable OPENAI_API_KEY (set)\n",
- "* openai:text-davinci-003\n",
- "* openai:text-davinci-002\n",
- "* openai:text-curie-001\n",
- "* openai:text-babbage-001\n",
- "* openai:text-ada-001\n",
- "* openai:davinci\n",
- "* openai:curie\n",
- "* openai:babbage\n",
- "* openai:ada\n",
+ "Requires environment variable: OPENAI_API_KEY (set)\n",
+ "* openai:babbage-002\n",
+ "* openai:davinci-002\n",
+ "* openai:gpt-3.5-turbo-instruct\n",
"\n",
"openai-chat\n",
- "Requires environment variable OPENAI_API_KEY (set)\n",
+ "Requires environment variable: OPENAI_API_KEY (set)\n",
"* openai-chat:gpt-3.5-turbo\n",
- "* openai-chat:gpt-3.5-turbo-16k\n",
"* openai-chat:gpt-3.5-turbo-0301\n",
"* openai-chat:gpt-3.5-turbo-0613\n",
+ "* openai-chat:gpt-3.5-turbo-1106\n",
+ "* openai-chat:gpt-3.5-turbo-16k\n",
"* openai-chat:gpt-3.5-turbo-16k-0613\n",
"* openai-chat:gpt-4\n",
- "* openai-chat:gpt-4-0314\n",
"* openai-chat:gpt-4-0613\n",
"* openai-chat:gpt-4-32k\n",
- "* openai-chat:gpt-4-32k-0314\n",
"* openai-chat:gpt-4-32k-0613\n",
+ "* openai-chat:gpt-4-1106-preview\n",
+ "\n",
+ "qianfan\n",
+ "Requires environment variables: QIANFAN_AK (not set), QIANFAN_SK (not set)\n",
+ "* qianfan:ERNIE-Bot\n",
+ "* qianfan:ERNIE-Bot-4\n",
"\n",
"sagemaker-endpoint\n",
"* Specify an endpoint name as the model ID. In addition, you must specify a region name, request schema, and response path. For more information, see the documentation about [SageMaker endpoints deployment](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-deployment.html) and about [using magic commands with SageMaker endpoints](https://jupyter-ai.readthedocs.io/en/latest/users/index.html#using-magic-commands-with-sagemaker-endpoints).\n",
@@ -236,9 +274,12 @@
"\n",
"Aliases and custom commands:\n",
"gpt2 - huggingface_hub:gpt2\n",
- "gpt3 - openai:text-davinci-003\n",
+ "gpt3 - openai:davinci-002\n",
"chatgpt - openai-chat:gpt-3.5-turbo\n",
- "gpt4 - openai-chat:gpt-4\n"
+ "gpt4 - openai-chat:gpt-4\n",
+ "ernie-bot - qianfan:ERNIE-Bot\n",
+ "ernie-bot-4 - qianfan:ERNIE-Bot-4\n",
+ "titan - bedrock:amazon.titan-tg1-large\n"
]
},
"execution_count": 4,
@@ -261,20 +302,14 @@
"text/markdown": [
"| Provider | Environment variable | Set? | Models |\n",
"|----------|----------------------|------|--------|\n",
- "| `openai` | `OPENAI_API_KEY` | ✅ | `openai:text-davinci-003`, `openai:text-davinci-002`, `openai:text-curie-001`, `openai:text-babbage-001`, `openai:text-ada-001`, `openai:davinci`, `openai:curie`, `openai:babbage`, `openai:ada` |\n"
+ "| `openai` | `OPENAI_API_KEY` | ✅ | - `openai:babbage-002`<br>- `openai:davinci-002`<br>- `openai:gpt-3.5-turbo-instruct` |\n"
],
"text/plain": [
"openai\n",
- "Requires environment variable OPENAI_API_KEY (set)\n",
- "* openai:text-davinci-003\n",
- "* openai:text-davinci-002\n",
- "* openai:text-curie-001\n",
- "* openai:text-babbage-001\n",
- "* openai:text-ada-001\n",
- "* openai:davinci\n",
- "* openai:curie\n",
- "* openai:babbage\n",
- "* openai:ada\n",
+ "Requires environment variable: OPENAI_API_KEY (set)\n",
+ "* openai:babbage-002\n",
+ "* openai:davinci-002\n",
+ "* openai:gpt-3.5-turbo-instruct\n",
"\n"
]
},
@@ -334,15 +369,18 @@
"text/markdown": [
"| Provider | Environment variable | Set? | Models |\n",
"|----------|----------------------|------|--------|\n",
- "| `ai21` | `AI21_API_KEY` | ✅ | `ai21:j1-large`, `ai21:j1-grande`, `ai21:j1-jumbo`, `ai21:j1-grande-instruct`, `ai21:j2-large`, `ai21:j2-grande`, `ai21:j2-jumbo`, `ai21:j2-grande-instruct`, `ai21:j2-jumbo-instruct` |\n",
- "| `bedrock` | Not applicable. | N/A | `bedrock:amazon.titan-tg1-large`, `bedrock:anthropic.claude-v1`, `bedrock:anthropic.claude-instant-v1`, `bedrock:anthropic.claude-v2`, `bedrock:ai21.j2-jumbo-instruct`, `bedrock:ai21.j2-grande-instruct` |\n",
- "| `anthropic` | `ANTHROPIC_API_KEY` | ✅ | `anthropic:claude-v1`, `anthropic:claude-v1.0`, `anthropic:claude-v1.2`, `anthropic:claude-2`, `anthropic:claude-instant-v1`, `anthropic:claude-instant-v1.0` |\n",
+ "| `ai21` | `AI21_API_KEY` | ✅ | - `ai21:j1-large`<br>- `ai21:j1-grande`<br>- `ai21:j1-jumbo`<br>- `ai21:j1-grande-instruct`<br>- `ai21:j2-large`<br>- `ai21:j2-grande`<br>- `ai21:j2-jumbo`<br>- `ai21:j2-grande-instruct`<br>- `ai21:j2-jumbo-instruct` |\n",
+ "| `bedrock` | Not applicable. | N/A | - `bedrock:amazon.titan-text-express-v1`<br>- `bedrock:ai21.j2-ultra-v1`<br>- `bedrock:ai21.j2-mid-v1`<br>- `bedrock:cohere.command-light-text-v14`<br>- `bedrock:cohere.command-text-v14`<br>- `bedrock:meta.llama2-13b-chat-v1`<br>- `bedrock:meta.llama2-70b-chat-v1` |\n",
+ "| `bedrock-chat` | Not applicable. | N/A | - `bedrock-chat:anthropic.claude-v1`<br>- `bedrock-chat:anthropic.claude-v2`<br>- `bedrock-chat:anthropic.claude-v2:1`<br>- `bedrock-chat:anthropic.claude-instant-v1` |\n",
+ "| `anthropic` | `ANTHROPIC_API_KEY` | ✅ | - `anthropic:claude-v1`<br>- `anthropic:claude-v1.0`<br>- `anthropic:claude-v1.2`<br>- `anthropic:claude-2`<br>- `anthropic:claude-2.0`<br>- `anthropic:claude-instant-v1`<br>- `anthropic:claude-instant-v1.0`<br>- `anthropic:claude-instant-v1.2` |\n",
+ "| `anthropic-chat` | `ANTHROPIC_API_KEY` | ✅ | - `anthropic-chat:claude-v1`<br>- `anthropic-chat:claude-v1.0`<br>- `anthropic-chat:claude-v1.2`<br>- `anthropic-chat:claude-2`<br>- `anthropic-chat:claude-2.0`<br>- `anthropic-chat:claude-instant-v1`<br>- `anthropic-chat:claude-instant-v1.0`<br>- `anthropic-chat:claude-instant-v1.2` |\n",
"| `azure-chat-openai` | `OPENAI_API_KEY` | ✅ | This provider does not define a list of models. |\n",
- "| `cohere` | `COHERE_API_KEY` | ✅ | `cohere:medium`, `cohere:xlarge` |\n",
- "| `gpt4all` | Not applicable. | N/A | `gpt4all:ggml-gpt4all-j-v1.2-jazzy`, `gpt4all:ggml-gpt4all-j-v1.3-groovy`, `gpt4all:ggml-gpt4all-l13b-snoozy` |\n",
+ "| `cohere` | `COHERE_API_KEY` | ✅ | - `cohere:command`<br>- `cohere:command-nightly`<br>- `cohere:command-light`<br>- `cohere:command-light-nightly` |\n",
+ "| `gpt4all` | Not applicable. | N/A | - `gpt4all:ggml-gpt4all-j-v1.2-jazzy`<br>- `gpt4all:ggml-gpt4all-j-v1.3-groovy`<br>- `gpt4all:ggml-gpt4all-l13b-snoozy`<br>- `gpt4all:mistral-7b-openorca.Q4_0`<br>- `gpt4all:mistral-7b-instruct-v0.1.Q4_0`<br>- `gpt4all:gpt4all-falcon-q4_0`<br>- `gpt4all:wizardlm-13b-v1.2.Q4_0`<br>- `gpt4all:nous-hermes-llama2-13b.Q4_0`<br>- `gpt4all:gpt4all-13b-snoozy-q4_0`<br>- `gpt4all:mpt-7b-chat-merges-q4_0`<br>- `gpt4all:orca-mini-3b-gguf2-q4_0`<br>- `gpt4all:starcoder-q4_0`<br>- `gpt4all:rift-coder-v0-7b-q4_0`<br>- `gpt4all:em_german_mistral_v01.Q4_0` |\n",
"| `huggingface_hub` | `HUGGINGFACEHUB_API_TOKEN` | ✅ | See [https://huggingface.co/models](https://huggingface.co/models) for a list of models. Pass a model's repository ID as the model ID; for example, `huggingface_hub:ExampleOwner/example-model`. |\n",
- "| `openai` | `OPENAI_API_KEY` | ✅ | `openai:text-davinci-003`, `openai:text-davinci-002`, `openai:text-curie-001`, `openai:text-babbage-001`, `openai:text-ada-001`, `openai:davinci`, `openai:curie`, `openai:babbage`, `openai:ada` |\n",
- "| `openai-chat` | `OPENAI_API_KEY` | ✅ | `openai-chat:gpt-3.5-turbo`, `openai-chat:gpt-3.5-turbo-16k`, `openai-chat:gpt-3.5-turbo-0301`, `openai-chat:gpt-3.5-turbo-0613`, `openai-chat:gpt-3.5-turbo-16k-0613`, `openai-chat:gpt-4`, `openai-chat:gpt-4-0314`, `openai-chat:gpt-4-0613`, `openai-chat:gpt-4-32k`, `openai-chat:gpt-4-32k-0314`, `openai-chat:gpt-4-32k-0613` |\n",
+ "| `openai` | `OPENAI_API_KEY` | ✅ | - `openai:babbage-002`<br>- `openai:davinci-002`<br>- `openai:gpt-3.5-turbo-instruct` |\n",
+ "| `openai-chat` | `OPENAI_API_KEY` | ✅ | - `openai-chat:gpt-3.5-turbo`<br>- `openai-chat:gpt-3.5-turbo-0301`<br>- `openai-chat:gpt-3.5-turbo-0613`<br>- `openai-chat:gpt-3.5-turbo-1106`<br>- `openai-chat:gpt-3.5-turbo-16k`<br>- `openai-chat:gpt-3.5-turbo-16k-0613`<br>- `openai-chat:gpt-4`<br>- `openai-chat:gpt-4-0613`<br>- `openai-chat:gpt-4-32k`<br>- `openai-chat:gpt-4-32k-0613`<br>- `openai-chat:gpt-4-1106-preview` |\n",
+ "| `qianfan` | `QIANFAN_AK`, `QIANFAN_SK` | ❌ | - `qianfan:ERNIE-Bot`<br>- `qianfan:ERNIE-Bot-4` |\n",
"| `sagemaker-endpoint` | Not applicable. | N/A | Specify an endpoint name as the model ID. In addition, you must specify a region name, request schema, and response path. For more information, see the documentation about [SageMaker endpoints deployment](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-deployment.html) and about [using magic commands with SageMaker endpoints](https://jupyter-ai.readthedocs.io/en/latest/users/index.html#using-magic-commands-with-sagemaker-endpoints). |\n",
"\n",
"Aliases and custom commands:\n",
@@ -350,14 +388,17 @@
"| Name | Target |\n",
"|------|--------|\n",
"| `gpt2` | `huggingface_hub:gpt2` |\n",
- "| `gpt3` | `openai:text-davinci-003` |\n",
+ "| `gpt3` | `openai:davinci-002` |\n",
"| `chatgpt` | `openai-chat:gpt-3.5-turbo` |\n",
"| `gpt4` | `openai-chat:gpt-4` |\n",
+ "| `ernie-bot` | `qianfan:ERNIE-Bot` |\n",
+ "| `ernie-bot-4` | `qianfan:ERNIE-Bot-4` |\n",
+ "| `titan` | `bedrock:amazon.titan-tg1-large` |\n",
"| `mychat` | `openai-chat:gpt-4` |\n"
],
"text/plain": [
"ai21\n",
- "Requires environment variable AI21_API_KEY (set)\n",
+ "Requires environment variable: AI21_API_KEY (set)\n",
"* ai21:j1-large\n",
"* ai21:j1-grande\n",
"* ai21:j1-jumbo\n",
@@ -369,65 +410,97 @@
"* ai21:j2-jumbo-instruct\n",
"\n",
"bedrock\n",
- "* bedrock:amazon.titan-tg1-large\n",
- "* bedrock:anthropic.claude-v1\n",
- "* bedrock:anthropic.claude-instant-v1\n",
- "* bedrock:anthropic.claude-v2\n",
- "* bedrock:ai21.j2-jumbo-instruct\n",
- "* bedrock:ai21.j2-grande-instruct\n",
+ "* bedrock:amazon.titan-text-express-v1\n",
+ "* bedrock:ai21.j2-ultra-v1\n",
+ "* bedrock:ai21.j2-mid-v1\n",
+ "* bedrock:cohere.command-light-text-v14\n",
+ "* bedrock:cohere.command-text-v14\n",
+ "* bedrock:meta.llama2-13b-chat-v1\n",
+ "* bedrock:meta.llama2-70b-chat-v1\n",
+ "\n",
+ "bedrock-chat\n",
+ "* bedrock-chat:anthropic.claude-v1\n",
+ "* bedrock-chat:anthropic.claude-v2\n",
+ "* bedrock-chat:anthropic.claude-v2:1\n",
+ "* bedrock-chat:anthropic.claude-instant-v1\n",
"\n",
"anthropic\n",
- "Requires environment variable ANTHROPIC_API_KEY (set)\n",
+ "Requires environment variable: ANTHROPIC_API_KEY (set)\n",
"* anthropic:claude-v1\n",
"* anthropic:claude-v1.0\n",
"* anthropic:claude-v1.2\n",
"* anthropic:claude-2\n",
+ "* anthropic:claude-2.0\n",
"* anthropic:claude-instant-v1\n",
"* anthropic:claude-instant-v1.0\n",
+ "* anthropic:claude-instant-v1.2\n",
+ "\n",
+ "anthropic-chat\n",
+ "Requires environment variable: ANTHROPIC_API_KEY (set)\n",
+ "* anthropic-chat:claude-v1\n",
+ "* anthropic-chat:claude-v1.0\n",
+ "* anthropic-chat:claude-v1.2\n",
+ "* anthropic-chat:claude-2\n",
+ "* anthropic-chat:claude-2.0\n",
+ "* anthropic-chat:claude-instant-v1\n",
+ "* anthropic-chat:claude-instant-v1.0\n",
+ "* anthropic-chat:claude-instant-v1.2\n",
"\n",
"azure-chat-openai\n",
- "Requires environment variable OPENAI_API_KEY (set)\n",
+ "Requires environment variable: OPENAI_API_KEY (set)\n",
"* This provider does not define a list of models.\n",
"\n",
"cohere\n",
- "Requires environment variable COHERE_API_KEY (set)\n",
- "* cohere:medium\n",
- "* cohere:xlarge\n",
+ "Requires environment variable: COHERE_API_KEY (set)\n",
+ "* cohere:command\n",
+ "* cohere:command-nightly\n",
+ "* cohere:command-light\n",
+ "* cohere:command-light-nightly\n",
"\n",
"gpt4all\n",
"* gpt4all:ggml-gpt4all-j-v1.2-jazzy\n",
"* gpt4all:ggml-gpt4all-j-v1.3-groovy\n",
"* gpt4all:ggml-gpt4all-l13b-snoozy\n",
+ "* gpt4all:mistral-7b-openorca.Q4_0\n",
+ "* gpt4all:mistral-7b-instruct-v0.1.Q4_0\n",
+ "* gpt4all:gpt4all-falcon-q4_0\n",
+ "* gpt4all:wizardlm-13b-v1.2.Q4_0\n",
+ "* gpt4all:nous-hermes-llama2-13b.Q4_0\n",
+ "* gpt4all:gpt4all-13b-snoozy-q4_0\n",
+ "* gpt4all:mpt-7b-chat-merges-q4_0\n",
+ "* gpt4all:orca-mini-3b-gguf2-q4_0\n",
+ "* gpt4all:starcoder-q4_0\n",
+ "* gpt4all:rift-coder-v0-7b-q4_0\n",
+ "* gpt4all:em_german_mistral_v01.Q4_0\n",
"\n",
"huggingface_hub\n",
- "Requires environment variable HUGGINGFACEHUB_API_TOKEN (set)\n",
+ "Requires environment variable: HUGGINGFACEHUB_API_TOKEN (set)\n",
"* See [https://huggingface.co/models](https://huggingface.co/models) for a list of models. Pass a model's repository ID as the model ID; for example, `huggingface_hub:ExampleOwner/example-model`.\n",
"\n",
"openai\n",
- "Requires environment variable OPENAI_API_KEY (set)\n",
- "* openai:text-davinci-003\n",
- "* openai:text-davinci-002\n",
- "* openai:text-curie-001\n",
- "* openai:text-babbage-001\n",
- "* openai:text-ada-001\n",
- "* openai:davinci\n",
- "* openai:curie\n",
- "* openai:babbage\n",
- "* openai:ada\n",
+ "Requires environment variable: OPENAI_API_KEY (set)\n",
+ "* openai:babbage-002\n",
+ "* openai:davinci-002\n",
+ "* openai:gpt-3.5-turbo-instruct\n",
"\n",
"openai-chat\n",
- "Requires environment variable OPENAI_API_KEY (set)\n",
+ "Requires environment variable: OPENAI_API_KEY (set)\n",
"* openai-chat:gpt-3.5-turbo\n",
- "* openai-chat:gpt-3.5-turbo-16k\n",
"* openai-chat:gpt-3.5-turbo-0301\n",
"* openai-chat:gpt-3.5-turbo-0613\n",
+ "* openai-chat:gpt-3.5-turbo-1106\n",
+ "* openai-chat:gpt-3.5-turbo-16k\n",
"* openai-chat:gpt-3.5-turbo-16k-0613\n",
"* openai-chat:gpt-4\n",
- "* openai-chat:gpt-4-0314\n",
"* openai-chat:gpt-4-0613\n",
"* openai-chat:gpt-4-32k\n",
- "* openai-chat:gpt-4-32k-0314\n",
"* openai-chat:gpt-4-32k-0613\n",
+ "* openai-chat:gpt-4-1106-preview\n",
+ "\n",
+ "qianfan\n",
+ "Requires environment variables: QIANFAN_AK (not set), QIANFAN_SK (not set)\n",
+ "* qianfan:ERNIE-Bot\n",
+ "* qianfan:ERNIE-Bot-4\n",
"\n",
"sagemaker-endpoint\n",
"* Specify an endpoint name as the model ID. In addition, you must specify a region name, request schema, and response path. For more information, see the documentation about [SageMaker endpoints deployment](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-deployment.html) and about [using magic commands with SageMaker endpoints](https://jupyter-ai.readthedocs.io/en/latest/users/index.html#using-magic-commands-with-sagemaker-endpoints).\n",
@@ -435,9 +508,12 @@
"\n",
"Aliases and custom commands:\n",
"gpt2 - huggingface_hub:gpt2\n",
- "gpt3 - openai:text-davinci-003\n",
+ "gpt3 - openai:davinci-002\n",
"chatgpt - openai-chat:gpt-3.5-turbo\n",
"gpt4 - openai-chat:gpt-4\n",
+ "ernie-bot - qianfan:ERNIE-Bot\n",
+ "ernie-bot-4 - qianfan:ERNIE-Bot-4\n",
+ "titan - bedrock:amazon.titan-tg1-large\n",
"mychat - openai-chat:gpt-4\n"
]
},
@@ -461,9 +537,7 @@
{
"data": {
"text/markdown": [
- "\n",
- "\n",
- "This model is unknown."
+ "Apologies for the confusion, but your question is not clear. Could you please provide more details or context? Are you asking about a specific car, phone, laptop, or other product model? Without this crucial information, it's impossible to give an accurate answer."
],
"text/plain": [
""
@@ -473,8 +547,8 @@
"metadata": {
"text/markdown": {
"jupyter_ai": {
- "model_id": "text-davinci-003",
- "provider_id": "openai"
+ "model_id": "gpt-4",
+ "provider_id": "openai-chat"
}
}
},
@@ -482,7 +556,7 @@
}
],
"source": [
- "%%ai gpt3\n",
+ "%%ai gpt4\n",
"What model is this?"
]
},
@@ -507,7 +581,7 @@
}
],
"source": [
- "%ai update mychat openai:text-davinci-003"
+ "%ai update mychat openai:babbage-002"
]
},
{
@@ -521,9 +595,37 @@
{
"data": {
"text/markdown": [
+ " No HTML or other code.\n",
+ "\n",
+ "What is the difference between an assignment and a function call? Why is an assignment called a value assignment and a function call a value function?\n",
+ "\n",
+ "A value function is a function that takes no arguments, is returning a value. A function call is when you type in the name of a function.\n",
+ "\n",
+ "Below are the symbols used in the COVID-19 pandemic:\n",
+ "\n",
+ "The STARS symbol stands for the Swedish National Board of Health and Welfare.\n",
+ "\n",
+ "The HEALTHY symbol stands for the Swedish National Board of Health and Welfare.\n",
+ "\n",
+ "The HEALTHY symbol stands for the Swedish National Board of Health and Welfare.\n",
"\n",
+ "The COVID-19 symbol stands for the Swedish National Board of Health and Welfare.\n",
"\n",
- "This model is not specified."
+ "The COVID-19 symbol stands for the Swedish National Board of Health and Welfare.\n",
+ "\n",
+ "The COVID-19 symbol stands for the Swedish National Board of Health and Welfare.\n",
+ "\n",
+ "The COVID-19 symbol stands for the Swedish National Board of Health and Welfare.\n",
+ "\n",
+ "The COVID-19 symbol stands for the Swedish National Board of Health and Welfare.\n",
+ "\n",
+ "The COVID-19 symbol stands for the Swedish National Board of Health and Welfare.\n",
+ "\n",
+ "The COVID-19 symbol stands for the Swedish National Board of Health and Welfare.\n",
+ "\n",
+ "The COVID-19 symbol stands for the Swedish National Board of Health and Welfare.\n",
+ "\n",
+ "The COVID-19 symbol stands for"
],
"text/plain": [
""
@@ -533,7 +635,7 @@
"metadata": {
"text/markdown": {
"jupyter_ai": {
- "model_id": "text-davinci-003",
+ "model_id": "babbage-002",
"provider_id": "openai"
}
}
@@ -543,7 +645,7 @@
],
"source": [
"%%ai mychat\n",
- "What model is this?"
+ "Tell me about mathematical symbols"
]
},
{
@@ -559,27 +661,36 @@
"text/markdown": [
"| Provider | Environment variable | Set? | Models |\n",
"|----------|----------------------|------|--------|\n",
- "| `ai21` | `AI21_API_KEY` | ✅ | `ai21:j1-large`, `ai21:j1-grande`, `ai21:j1-jumbo`, `ai21:j1-grande-instruct`, `ai21:j2-large`, `ai21:j2-grande`, `ai21:j2-jumbo`, `ai21:j2-grande-instruct`, `ai21:j2-jumbo-instruct` |\n",
- "| `anthropic` | `ANTHROPIC_API_KEY` | ✅ | `anthropic:claude-v1`, `anthropic:claude-v1.0`, `anthropic:claude-v1.2`, `anthropic:claude-instant-v1`, `anthropic:claude-instant-v1.0` |\n",
- "| `cohere` | `COHERE_API_KEY` | ✅ | `cohere:medium`, `cohere:xlarge` |\n",
- "| `huggingface_hub` | `HUGGINGFACEHUB_API_TOKEN` | ✅ | See https://huggingface.co/models for a list of models. Pass a model's repository ID as the model ID; for example, `huggingface_hub:ExampleOwner/example-model`. |\n",
- "| `openai` | `OPENAI_API_KEY` | ✅ | `openai:text-davinci-003`, `openai:text-davinci-002`, `openai:text-curie-001`, `openai:text-babbage-001`, `openai:text-ada-001`, `openai:davinci`, `openai:curie`, `openai:babbage`, `openai:ada` |\n",
- "| `openai-chat` | `OPENAI_API_KEY` | ✅ | `openai-chat:gpt-4`, `openai-chat:gpt-4-0314`, `openai-chat:gpt-4-32k`, `openai-chat:gpt-4-32k-0314`, `openai-chat:gpt-3.5-turbo`, `openai-chat:gpt-3.5-turbo-0301` |\n",
- "| `sagemaker-endpoint` | Not applicable. | N/A | Specify an endpoint name as the model ID. In addition, you must include the `--region_name`, `--request_schema`, and the `--response_path` arguments. For more information, see the documentation about [SageMaker endpoints deployment](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-deployment.html) and about [using magic commands with SageMaker endpoints](https://jupyter-ai.readthedocs.io/en/latest/users/index.html#using-magic-commands-with-sagemaker-endpoints). |\n",
+ "| `ai21` | `AI21_API_KEY` | ✅ | `ai21:j1-large`, `ai21:j1-grande`, `ai21:j1-jumbo`, `ai21:j1-grande-instruct`, `ai21:j2-large`, `ai21:j2-grande`, `ai21:j2-jumbo`, `ai21:j2-grande-instruct`, `ai21:j2-jumbo-instruct` |\n",
+ "| `bedrock` | Not applicable. | N/A | `bedrock:amazon.titan-text-express-v1`, `bedrock:ai21.j2-ultra-v1`, `bedrock:ai21.j2-mid-v1`, `bedrock:cohere.command-light-text-v14`, `bedrock:cohere.command-text-v14`, `bedrock:meta.llama2-13b-chat-v1`, `bedrock:meta.llama2-70b-chat-v1` |\n",
+ "| `bedrock-chat` | Not applicable. | N/A | `bedrock-chat:anthropic.claude-v1`, `bedrock-chat:anthropic.claude-v2`, `bedrock-chat:anthropic.claude-v2:1`, `bedrock-chat:anthropic.claude-instant-v1` |\n",
+ "| `anthropic` | `ANTHROPIC_API_KEY` | ✅ | `anthropic:claude-v1`, `anthropic:claude-v1.0`, `anthropic:claude-v1.2`, `anthropic:claude-2`, `anthropic:claude-2.0`, `anthropic:claude-instant-v1`, `anthropic:claude-instant-v1.0`, `anthropic:claude-instant-v1.2` |\n",
+ "| `anthropic-chat` | `ANTHROPIC_API_KEY` | ✅ | `anthropic-chat:claude-v1`, `anthropic-chat:claude-v1.0`, `anthropic-chat:claude-v1.2`, `anthropic-chat:claude-2`, `anthropic-chat:claude-2.0`, `anthropic-chat:claude-instant-v1`, `anthropic-chat:claude-instant-v1.0`, `anthropic-chat:claude-instant-v1.2` |\n",
+ "| `azure-chat-openai` | `OPENAI_API_KEY` | ✅ | This provider does not define a list of models. |\n",
+ "| `cohere` | `COHERE_API_KEY` | ✅ | `cohere:command`, `cohere:command-nightly`, `cohere:command-light`, `cohere:command-light-nightly` |\n",
+ "| `gpt4all` | Not applicable. | N/A | `gpt4all:ggml-gpt4all-j-v1.2-jazzy`, `gpt4all:ggml-gpt4all-j-v1.3-groovy`, `gpt4all:ggml-gpt4all-l13b-snoozy`, `gpt4all:mistral-7b-openorca.Q4_0`, `gpt4all:mistral-7b-instruct-v0.1.Q4_0`, `gpt4all:gpt4all-falcon-q4_0`, `gpt4all:wizardlm-13b-v1.2.Q4_0`, `gpt4all:nous-hermes-llama2-13b.Q4_0`, `gpt4all:gpt4all-13b-snoozy-q4_0`, `gpt4all:mpt-7b-chat-merges-q4_0`, `gpt4all:orca-mini-3b-gguf2-q4_0`, `gpt4all:starcoder-q4_0`, `gpt4all:rift-coder-v0-7b-q4_0`, `gpt4all:em_german_mistral_v01.Q4_0` |\n",
+ "| `huggingface_hub` | `HUGGINGFACEHUB_API_TOKEN` | ✅ | See [https://huggingface.co/models](https://huggingface.co/models) for a list of models. Pass a model's repository ID as the model ID; for example, `huggingface_hub:ExampleOwner/example-model`. |\n",
+ "| `openai` | `OPENAI_API_KEY` | ✅ | `openai:babbage-002`, `openai:davinci-002`, `openai:gpt-3.5-turbo-instruct` |\n",
+ "| `openai-chat` | `OPENAI_API_KEY` | ✅ | `openai-chat:gpt-3.5-turbo`, `openai-chat:gpt-3.5-turbo-0301`, `openai-chat:gpt-3.5-turbo-0613`, `openai-chat:gpt-3.5-turbo-1106`, `openai-chat:gpt-3.5-turbo-16k`, `openai-chat:gpt-3.5-turbo-16k-0613`, `openai-chat:gpt-4`, `openai-chat:gpt-4-0613`, `openai-chat:gpt-4-32k`, `openai-chat:gpt-4-32k-0613`, `openai-chat:gpt-4-1106-preview` |\n",
+ "| `qianfan` | `QIANFAN_AK`, `QIANFAN_SK` | ❌ | `qianfan:ERNIE-Bot`, `qianfan:ERNIE-Bot-4` |\n",
+ "| `sagemaker-endpoint` | Not applicable. | N/A | Specify an endpoint name as the model ID. In addition, you must specify a region name, request schema, and response path. For more information, see the documentation about [SageMaker endpoints deployment](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-deployment.html) and about [using magic commands with SageMaker endpoints](https://jupyter-ai.readthedocs.io/en/latest/users/index.html#using-magic-commands-with-sagemaker-endpoints). |\n",
"\n",
"Aliases and custom commands:\n",
"\n",
"| Name | Target |\n",
"|------|--------|\n",
"| `gpt2` | `huggingface_hub:gpt2` |\n",
- "| `gpt3` | `openai:text-davinci-003` |\n",
+ "| `gpt3` | `openai:davinci-002` |\n",
"| `chatgpt` | `openai-chat:gpt-3.5-turbo` |\n",
"| `gpt4` | `openai-chat:gpt-4` |\n",
- "| `mychat` | `openai:text-davinci-003` |\n"
+ "| `ernie-bot` | `qianfan:ERNIE-Bot` |\n",
+ "| `ernie-bot-4` | `qianfan:ERNIE-Bot-4` |\n",
+ "| `titan` | `bedrock:amazon.titan-tg1-large` |\n",
+ "| `mychat` | `openai:babbage-002` |\n"
],
"text/plain": [
"ai21\n",
- "Requires environment variable AI21_API_KEY (set)\n",
+ "Requires environment variable: AI21_API_KEY (set)\n",
"* ai21:j1-large\n",
"* ai21:j1-grande\n",
"* ai21:j1-jumbo\n",
@@ -590,54 +701,112 @@
"* ai21:j2-grande-instruct\n",
"* ai21:j2-jumbo-instruct\n",
"\n",
+ "bedrock\n",
+ "* bedrock:amazon.titan-text-express-v1\n",
+ "* bedrock:ai21.j2-ultra-v1\n",
+ "* bedrock:ai21.j2-mid-v1\n",
+ "* bedrock:cohere.command-light-text-v14\n",
+ "* bedrock:cohere.command-text-v14\n",
+ "* bedrock:meta.llama2-13b-chat-v1\n",
+ "* bedrock:meta.llama2-70b-chat-v1\n",
+ "\n",
+ "bedrock-chat\n",
+ "* bedrock-chat:anthropic.claude-v1\n",
+ "* bedrock-chat:anthropic.claude-v2\n",
+ "* bedrock-chat:anthropic.claude-v2:1\n",
+ "* bedrock-chat:anthropic.claude-instant-v1\n",
+ "\n",
"anthropic\n",
- "Requires environment variable ANTHROPIC_API_KEY (set)\n",
+ "Requires environment variable: ANTHROPIC_API_KEY (set)\n",
"* anthropic:claude-v1\n",
"* anthropic:claude-v1.0\n",
"* anthropic:claude-v1.2\n",
+ "* anthropic:claude-2\n",
+ "* anthropic:claude-2.0\n",
"* anthropic:claude-instant-v1\n",
"* anthropic:claude-instant-v1.0\n",
+ "* anthropic:claude-instant-v1.2\n",
+ "\n",
+ "anthropic-chat\n",
+ "Requires environment variable: ANTHROPIC_API_KEY (set)\n",
+ "* anthropic-chat:claude-v1\n",
+ "* anthropic-chat:claude-v1.0\n",
+ "* anthropic-chat:claude-v1.2\n",
+ "* anthropic-chat:claude-2\n",
+ "* anthropic-chat:claude-2.0\n",
+ "* anthropic-chat:claude-instant-v1\n",
+ "* anthropic-chat:claude-instant-v1.0\n",
+ "* anthropic-chat:claude-instant-v1.2\n",
+ "\n",
+ "azure-chat-openai\n",
+ "Requires environment variable: OPENAI_API_KEY (set)\n",
+ "* This provider does not define a list of models.\n",
"\n",
"cohere\n",
- "Requires environment variable COHERE_API_KEY (set)\n",
- "* cohere:medium\n",
- "* cohere:xlarge\n",
+ "Requires environment variable: COHERE_API_KEY (set)\n",
+ "* cohere:command\n",
+ "* cohere:command-nightly\n",
+ "* cohere:command-light\n",
+ "* cohere:command-light-nightly\n",
+ "\n",
+ "gpt4all\n",
+ "* gpt4all:ggml-gpt4all-j-v1.2-jazzy\n",
+ "* gpt4all:ggml-gpt4all-j-v1.3-groovy\n",
+ "* gpt4all:ggml-gpt4all-l13b-snoozy\n",
+ "* gpt4all:mistral-7b-openorca.Q4_0\n",
+ "* gpt4all:mistral-7b-instruct-v0.1.Q4_0\n",
+ "* gpt4all:gpt4all-falcon-q4_0\n",
+ "* gpt4all:wizardlm-13b-v1.2.Q4_0\n",
+ "* gpt4all:nous-hermes-llama2-13b.Q4_0\n",
+ "* gpt4all:gpt4all-13b-snoozy-q4_0\n",
+ "* gpt4all:mpt-7b-chat-merges-q4_0\n",
+ "* gpt4all:orca-mini-3b-gguf2-q4_0\n",
+ "* gpt4all:starcoder-q4_0\n",
+ "* gpt4all:rift-coder-v0-7b-q4_0\n",
+ "* gpt4all:em_german_mistral_v01.Q4_0\n",
"\n",
"huggingface_hub\n",
- "Requires environment variable HUGGINGFACEHUB_API_TOKEN (set)\n",
- "* See https://huggingface.co/models for a list of models. Pass a model's repository ID as the model ID; for example, `huggingface_hub:ExampleOwner/example-model`.\n",
+ "Requires environment variable: HUGGINGFACEHUB_API_TOKEN (set)\n",
+ "* See [https://huggingface.co/models](https://huggingface.co/models) for a list of models. Pass a model's repository ID as the model ID; for example, `huggingface_hub:ExampleOwner/example-model`.\n",
"\n",
"openai\n",
- "Requires environment variable OPENAI_API_KEY (set)\n",
- "* openai:text-davinci-003\n",
- "* openai:text-davinci-002\n",
- "* openai:text-curie-001\n",
- "* openai:text-babbage-001\n",
- "* openai:text-ada-001\n",
- "* openai:davinci\n",
- "* openai:curie\n",
- "* openai:babbage\n",
- "* openai:ada\n",
+ "Requires environment variable: OPENAI_API_KEY (set)\n",
+ "* openai:babbage-002\n",
+ "* openai:davinci-002\n",
+ "* openai:gpt-3.5-turbo-instruct\n",
"\n",
"openai-chat\n",
- "Requires environment variable OPENAI_API_KEY (set)\n",
- "* openai-chat:gpt-4\n",
- "* openai-chat:gpt-4-0314\n",
- "* openai-chat:gpt-4-32k\n",
- "* openai-chat:gpt-4-32k-0314\n",
+ "Requires environment variable: OPENAI_API_KEY (set)\n",
"* openai-chat:gpt-3.5-turbo\n",
"* openai-chat:gpt-3.5-turbo-0301\n",
+ "* openai-chat:gpt-3.5-turbo-0613\n",
+ "* openai-chat:gpt-3.5-turbo-1106\n",
+ "* openai-chat:gpt-3.5-turbo-16k\n",
+ "* openai-chat:gpt-3.5-turbo-16k-0613\n",
+ "* openai-chat:gpt-4\n",
+ "* openai-chat:gpt-4-0613\n",
+ "* openai-chat:gpt-4-32k\n",
+ "* openai-chat:gpt-4-32k-0613\n",
+ "* openai-chat:gpt-4-1106-preview\n",
+ "\n",
+ "qianfan\n",
+ "Requires environment variables: QIANFAN_AK (not set), QIANFAN_SK (not set)\n",
+ "* qianfan:ERNIE-Bot\n",
+ "* qianfan:ERNIE-Bot-4\n",
"\n",
"sagemaker-endpoint\n",
- "* Specify an endpoint name as the model ID. In addition, you must include the `--region_name`, `--request_schema`, and the `--response_path` arguments. For more information, see the documentation about [SageMaker endpoints deployment](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-deployment.html) and about [using magic commands with SageMaker endpoints](https://jupyter-ai.readthedocs.io/en/latest/users/index.html#using-magic-commands-with-sagemaker-endpoints).\n",
+ "* Specify an endpoint name as the model ID. In addition, you must specify a region name, request schema, and response path. For more information, see the documentation about [SageMaker endpoints deployment](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-deployment.html) and about [using magic commands with SageMaker endpoints](https://jupyter-ai.readthedocs.io/en/latest/users/index.html#using-magic-commands-with-sagemaker-endpoints).\n",
"\n",
"\n",
"Aliases and custom commands:\n",
"gpt2 - huggingface_hub:gpt2\n",
- "gpt3 - openai:text-davinci-003\n",
+ "gpt3 - openai:davinci-002\n",
"chatgpt - openai-chat:gpt-3.5-turbo\n",
"gpt4 - openai-chat:gpt-4\n",
- "mychat - openai:text-davinci-003\n"
+ "ernie-bot - qianfan:ERNIE-Bot\n",
+ "ernie-bot-4 - qianfan:ERNIE-Bot-4\n",
+ "titan - bedrock:amazon.titan-tg1-large\n",
+ "mychat - openai:babbage-002\n"
]
},
"execution_count": 11,
@@ -688,26 +857,35 @@
"text/markdown": [
"| Provider | Environment variable | Set? | Models |\n",
"|----------|----------------------|------|--------|\n",
- "| `ai21` | `AI21_API_KEY` | ✅ | `ai21:j1-large`, `ai21:j1-grande`, `ai21:j1-jumbo`, `ai21:j1-grande-instruct`, `ai21:j2-large`, `ai21:j2-grande`, `ai21:j2-jumbo`, `ai21:j2-grande-instruct`, `ai21:j2-jumbo-instruct` |\n",
- "| `anthropic` | `ANTHROPIC_API_KEY` | ✅ | `anthropic:claude-v1`, `anthropic:claude-v1.0`, `anthropic:claude-v1.2`, `anthropic:claude-instant-v1`, `anthropic:claude-instant-v1.0` |\n",
- "| `cohere` | `COHERE_API_KEY` | ✅ | `cohere:medium`, `cohere:xlarge` |\n",
- "| `huggingface_hub` | `HUGGINGFACEHUB_API_TOKEN` | ✅ | See https://huggingface.co/models for a list of models. Pass a model's repository ID as the model ID; for example, `huggingface_hub:ExampleOwner/example-model`. |\n",
- "| `openai` | `OPENAI_API_KEY` | ✅ | `openai:text-davinci-003`, `openai:text-davinci-002`, `openai:text-curie-001`, `openai:text-babbage-001`, `openai:text-ada-001`, `openai:davinci`, `openai:curie`, `openai:babbage`, `openai:ada` |\n",
- "| `openai-chat` | `OPENAI_API_KEY` | ✅ | `openai-chat:gpt-4`, `openai-chat:gpt-4-0314`, `openai-chat:gpt-4-32k`, `openai-chat:gpt-4-32k-0314`, `openai-chat:gpt-3.5-turbo`, `openai-chat:gpt-3.5-turbo-0301` |\n",
- "| `sagemaker-endpoint` | Not applicable. | N/A | Specify an endpoint name as the model ID. In addition, you must include the `--region_name`, `--request_schema`, and the `--response_path` arguments. For more information, see the documentation about [SageMaker endpoints deployment](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-deployment.html) and about [using magic commands with SageMaker endpoints](https://jupyter-ai.readthedocs.io/en/latest/users/index.html#using-magic-commands-with-sagemaker-endpoints). |\n",
+ "| `ai21` | `AI21_API_KEY` | ✅ | `ai21:j1-large`, `ai21:j1-grande`, `ai21:j1-jumbo`, `ai21:j1-grande-instruct`, `ai21:j2-large`, `ai21:j2-grande`, `ai21:j2-jumbo`, `ai21:j2-grande-instruct`, `ai21:j2-jumbo-instruct` |\n",
+ "| `bedrock` | Not applicable. | N/A | `bedrock:amazon.titan-text-express-v1`, `bedrock:ai21.j2-ultra-v1`, `bedrock:ai21.j2-mid-v1`, `bedrock:cohere.command-light-text-v14`, `bedrock:cohere.command-text-v14`, `bedrock:meta.llama2-13b-chat-v1`, `bedrock:meta.llama2-70b-chat-v1` |\n",
+ "| `bedrock-chat` | Not applicable. | N/A | `bedrock-chat:anthropic.claude-v1`, `bedrock-chat:anthropic.claude-v2`, `bedrock-chat:anthropic.claude-v2:1`, `bedrock-chat:anthropic.claude-instant-v1` |\n",
+ "| `anthropic` | `ANTHROPIC_API_KEY` | ✅ | `anthropic:claude-v1`, `anthropic:claude-v1.0`, `anthropic:claude-v1.2`, `anthropic:claude-2`, `anthropic:claude-2.0`, `anthropic:claude-instant-v1`, `anthropic:claude-instant-v1.0`, `anthropic:claude-instant-v1.2` |\n",
+ "| `anthropic-chat` | `ANTHROPIC_API_KEY` | ✅ | `anthropic-chat:claude-v1`, `anthropic-chat:claude-v1.0`, `anthropic-chat:claude-v1.2`, `anthropic-chat:claude-2`, `anthropic-chat:claude-2.0`, `anthropic-chat:claude-instant-v1`, `anthropic-chat:claude-instant-v1.0`, `anthropic-chat:claude-instant-v1.2` |\n",
+ "| `azure-chat-openai` | `OPENAI_API_KEY` | ✅ | This provider does not define a list of models. |\n",
+ "| `cohere` | `COHERE_API_KEY` | ✅ | `cohere:command`, `cohere:command-nightly`, `cohere:command-light`, `cohere:command-light-nightly` |\n",
+ "| `gpt4all` | Not applicable. | N/A | `gpt4all:ggml-gpt4all-j-v1.2-jazzy`, `gpt4all:ggml-gpt4all-j-v1.3-groovy`, `gpt4all:ggml-gpt4all-l13b-snoozy`, `gpt4all:mistral-7b-openorca.Q4_0`, `gpt4all:mistral-7b-instruct-v0.1.Q4_0`, `gpt4all:gpt4all-falcon-q4_0`, `gpt4all:wizardlm-13b-v1.2.Q4_0`, `gpt4all:nous-hermes-llama2-13b.Q4_0`, `gpt4all:gpt4all-13b-snoozy-q4_0`, `gpt4all:mpt-7b-chat-merges-q4_0`, `gpt4all:orca-mini-3b-gguf2-q4_0`, `gpt4all:starcoder-q4_0`, `gpt4all:rift-coder-v0-7b-q4_0`, `gpt4all:em_german_mistral_v01.Q4_0` |\n",
+ "| `huggingface_hub` | `HUGGINGFACEHUB_API_TOKEN` | ✅ | See [https://huggingface.co/models](https://huggingface.co/models) for a list of models. Pass a model's repository ID as the model ID; for example, `huggingface_hub:ExampleOwner/example-model`. |\n",
+ "| `openai` | `OPENAI_API_KEY` | ✅ | `openai:babbage-002`, `openai:davinci-002`, `openai:gpt-3.5-turbo-instruct` |\n",
+ "| `openai-chat` | `OPENAI_API_KEY` | ✅ | `openai-chat:gpt-3.5-turbo`, `openai-chat:gpt-3.5-turbo-0301`, `openai-chat:gpt-3.5-turbo-0613`, `openai-chat:gpt-3.5-turbo-1106`, `openai-chat:gpt-3.5-turbo-16k`, `openai-chat:gpt-3.5-turbo-16k-0613`, `openai-chat:gpt-4`, `openai-chat:gpt-4-0613`, `openai-chat:gpt-4-32k`, `openai-chat:gpt-4-32k-0613`, `openai-chat:gpt-4-1106-preview` |\n",
+ "| `qianfan` | `QIANFAN_AK`, `QIANFAN_SK` | ❌ | `qianfan:ERNIE-Bot`, `qianfan:ERNIE-Bot-4` |\n",
+ "| `sagemaker-endpoint` | Not applicable. | N/A | Specify an endpoint name as the model ID. In addition, you must specify a region name, request schema, and response path. For more information, see the documentation about [SageMaker endpoints deployment](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-deployment.html) and about [using magic commands with SageMaker endpoints](https://jupyter-ai.readthedocs.io/en/latest/users/index.html#using-magic-commands-with-sagemaker-endpoints). |\n",
"\n",
"Aliases and custom commands:\n",
"\n",
"| Name | Target |\n",
"|------|--------|\n",
"| `gpt2` | `huggingface_hub:gpt2` |\n",
- "| `gpt3` | `openai:text-davinci-003` |\n",
+ "| `gpt3` | `openai:davinci-002` |\n",
"| `chatgpt` | `openai-chat:gpt-3.5-turbo` |\n",
- "| `gpt4` | `openai-chat:gpt-4` |\n"
+ "| `gpt4` | `openai-chat:gpt-4` |\n",
+ "| `ernie-bot` | `qianfan:ERNIE-Bot` |\n",
+ "| `ernie-bot-4` | `qianfan:ERNIE-Bot-4` |\n",
+ "| `titan` | `bedrock:amazon.titan-tg1-large` |\n"
],
"text/plain": [
"ai21\n",
- "Requires environment variable AI21_API_KEY (set)\n",
+ "Requires environment variable: AI21_API_KEY (set)\n",
"* ai21:j1-large\n",
"* ai21:j1-grande\n",
"* ai21:j1-jumbo\n",
@@ -718,53 +896,111 @@
"* ai21:j2-grande-instruct\n",
"* ai21:j2-jumbo-instruct\n",
"\n",
+ "bedrock\n",
+ "* bedrock:amazon.titan-text-express-v1\n",
+ "* bedrock:ai21.j2-ultra-v1\n",
+ "* bedrock:ai21.j2-mid-v1\n",
+ "* bedrock:cohere.command-light-text-v14\n",
+ "* bedrock:cohere.command-text-v14\n",
+ "* bedrock:meta.llama2-13b-chat-v1\n",
+ "* bedrock:meta.llama2-70b-chat-v1\n",
+ "\n",
+ "bedrock-chat\n",
+ "* bedrock-chat:anthropic.claude-v1\n",
+ "* bedrock-chat:anthropic.claude-v2\n",
+ "* bedrock-chat:anthropic.claude-v2:1\n",
+ "* bedrock-chat:anthropic.claude-instant-v1\n",
+ "\n",
"anthropic\n",
- "Requires environment variable ANTHROPIC_API_KEY (set)\n",
+ "Requires environment variable: ANTHROPIC_API_KEY (set)\n",
"* anthropic:claude-v1\n",
"* anthropic:claude-v1.0\n",
"* anthropic:claude-v1.2\n",
+ "* anthropic:claude-2\n",
+ "* anthropic:claude-2.0\n",
"* anthropic:claude-instant-v1\n",
"* anthropic:claude-instant-v1.0\n",
+ "* anthropic:claude-instant-v1.2\n",
+ "\n",
+ "anthropic-chat\n",
+ "Requires environment variable: ANTHROPIC_API_KEY (set)\n",
+ "* anthropic-chat:claude-v1\n",
+ "* anthropic-chat:claude-v1.0\n",
+ "* anthropic-chat:claude-v1.2\n",
+ "* anthropic-chat:claude-2\n",
+ "* anthropic-chat:claude-2.0\n",
+ "* anthropic-chat:claude-instant-v1\n",
+ "* anthropic-chat:claude-instant-v1.0\n",
+ "* anthropic-chat:claude-instant-v1.2\n",
+ "\n",
+ "azure-chat-openai\n",
+ "Requires environment variable: OPENAI_API_KEY (set)\n",
+ "* This provider does not define a list of models.\n",
"\n",
"cohere\n",
- "Requires environment variable COHERE_API_KEY (set)\n",
- "* cohere:medium\n",
- "* cohere:xlarge\n",
+ "Requires environment variable: COHERE_API_KEY (set)\n",
+ "* cohere:command\n",
+ "* cohere:command-nightly\n",
+ "* cohere:command-light\n",
+ "* cohere:command-light-nightly\n",
+ "\n",
+ "gpt4all\n",
+ "* gpt4all:ggml-gpt4all-j-v1.2-jazzy\n",
+ "* gpt4all:ggml-gpt4all-j-v1.3-groovy\n",
+ "* gpt4all:ggml-gpt4all-l13b-snoozy\n",
+ "* gpt4all:mistral-7b-openorca.Q4_0\n",
+ "* gpt4all:mistral-7b-instruct-v0.1.Q4_0\n",
+ "* gpt4all:gpt4all-falcon-q4_0\n",
+ "* gpt4all:wizardlm-13b-v1.2.Q4_0\n",
+ "* gpt4all:nous-hermes-llama2-13b.Q4_0\n",
+ "* gpt4all:gpt4all-13b-snoozy-q4_0\n",
+ "* gpt4all:mpt-7b-chat-merges-q4_0\n",
+ "* gpt4all:orca-mini-3b-gguf2-q4_0\n",
+ "* gpt4all:starcoder-q4_0\n",
+ "* gpt4all:rift-coder-v0-7b-q4_0\n",
+ "* gpt4all:em_german_mistral_v01.Q4_0\n",
"\n",
"huggingface_hub\n",
- "Requires environment variable HUGGINGFACEHUB_API_TOKEN (set)\n",
- "* See https://huggingface.co/models for a list of models. Pass a model's repository ID as the model ID; for example, `huggingface_hub:ExampleOwner/example-model`.\n",
+ "Requires environment variable: HUGGINGFACEHUB_API_TOKEN (set)\n",
+ "* See [https://huggingface.co/models](https://huggingface.co/models) for a list of models. Pass a model's repository ID as the model ID; for example, `huggingface_hub:ExampleOwner/example-model`.\n",
"\n",
"openai\n",
- "Requires environment variable OPENAI_API_KEY (set)\n",
- "* openai:text-davinci-003\n",
- "* openai:text-davinci-002\n",
- "* openai:text-curie-001\n",
- "* openai:text-babbage-001\n",
- "* openai:text-ada-001\n",
- "* openai:davinci\n",
- "* openai:curie\n",
- "* openai:babbage\n",
- "* openai:ada\n",
+ "Requires environment variable: OPENAI_API_KEY (set)\n",
+ "* openai:babbage-002\n",
+ "* openai:davinci-002\n",
+ "* openai:gpt-3.5-turbo-instruct\n",
"\n",
"openai-chat\n",
- "Requires environment variable OPENAI_API_KEY (set)\n",
- "* openai-chat:gpt-4\n",
- "* openai-chat:gpt-4-0314\n",
- "* openai-chat:gpt-4-32k\n",
- "* openai-chat:gpt-4-32k-0314\n",
+ "Requires environment variable: OPENAI_API_KEY (set)\n",
"* openai-chat:gpt-3.5-turbo\n",
"* openai-chat:gpt-3.5-turbo-0301\n",
+ "* openai-chat:gpt-3.5-turbo-0613\n",
+ "* openai-chat:gpt-3.5-turbo-1106\n",
+ "* openai-chat:gpt-3.5-turbo-16k\n",
+ "* openai-chat:gpt-3.5-turbo-16k-0613\n",
+ "* openai-chat:gpt-4\n",
+ "* openai-chat:gpt-4-0613\n",
+ "* openai-chat:gpt-4-32k\n",
+ "* openai-chat:gpt-4-32k-0613\n",
+ "* openai-chat:gpt-4-1106-preview\n",
+ "\n",
+ "qianfan\n",
+ "Requires environment variables: QIANFAN_AK (not set), QIANFAN_SK (not set)\n",
+ "* qianfan:ERNIE-Bot\n",
+ "* qianfan:ERNIE-Bot-4\n",
"\n",
"sagemaker-endpoint\n",
- "* Specify an endpoint name as the model ID. In addition, you must include the `--region_name`, `--request_schema`, and the `--response_path` arguments. For more information, see the documentation about [SageMaker endpoints deployment](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-deployment.html) and about [using magic commands with SageMaker endpoints](https://jupyter-ai.readthedocs.io/en/latest/users/index.html#using-magic-commands-with-sagemaker-endpoints).\n",
+ "* Specify an endpoint name as the model ID. In addition, you must specify a region name, request schema, and response path. For more information, see the documentation about [SageMaker endpoints deployment](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-deployment.html) and about [using magic commands with SageMaker endpoints](https://jupyter-ai.readthedocs.io/en/latest/users/index.html#using-magic-commands-with-sagemaker-endpoints).\n",
"\n",
"\n",
"Aliases and custom commands:\n",
"gpt2 - huggingface_hub:gpt2\n",
- "gpt3 - openai:text-davinci-003\n",
+ "gpt3 - openai:davinci-002\n",
"chatgpt - openai-chat:gpt-3.5-turbo\n",
- "gpt4 - openai-chat:gpt-4\n"
+ "gpt4 - openai-chat:gpt-4\n",
+ "ernie-bot - qianfan:ERNIE-Bot\n",
+ "ernie-bot-4 - qianfan:ERNIE-Bot-4\n",
+ "titan - bedrock:amazon.titan-tg1-large\n"
]
},
"execution_count": 13,
@@ -797,12 +1033,12 @@
"source": [
"from langchain.chains import LLMChain\n",
"from langchain.prompts import PromptTemplate\n",
- "from langchain.llms import OpenAI\n",
+ "from langchain_community.llms import Cohere\n",
"\n",
- "llm = OpenAI(temperature=0.9)\n",
+ "llm = Cohere(model=\"command\", max_tokens=256, temperature=0.75)\n",
"prompt = PromptTemplate(\n",
" input_variables=[\"product\"],\n",
- " template=\"What is a good name for a company that makes {product}?\",\n",
+ " template=\"What is a good name for a company that makes {product}? Provide only one name, with no other text.\",\n",
")\n",
"chain = LLMChain(llm=llm, prompt=prompt)"
]
@@ -810,19 +1046,6 @@
{
"cell_type": "code",
"execution_count": 15,
- "id": "29d5239f-7601-405e-b059-4e881ebf7ab1",
- "metadata": {
- "tags": []
- },
- "outputs": [],
- "source": [
- "from langchain.chains import LLMChain\n",
- "chain = LLMChain(llm=llm, prompt=prompt)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 16,
"id": "43e7a77c-93af-4ef7-a104-f932c9f54183",
"metadata": {
"tags": []
@@ -832,20 +1055,18 @@
"name": "stdout",
"output_type": "stream",
"text": [
- "\n",
- "\n",
- "Bright Toes Socks.\n"
+ "{'product': 'colorful socks', 'text': ' FunkyHues'}\n"
]
}
],
"source": [
"# Run the chain only specifying the input variable.\n",
- "print(chain.run(\"colorful socks\"))"
+ "print(chain.invoke(\"colorful socks\"))"
]
},
{
"cell_type": "code",
- "execution_count": 17,
+ "execution_count": 16,
"id": "9badc567-9720-4e33-ab4a-54fda5129f36",
"metadata": {
"tags": []
@@ -860,7 +1081,7 @@
"Registered new alias `company`"
]
},
- "execution_count": 17,
+ "execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
@@ -871,7 +1092,7 @@
},
{
"cell_type": "code",
- "execution_count": 18,
+ "execution_count": 17,
"id": "92b75d71-8844-4872-b424-b0023706abb1",
"metadata": {
"tags": []
@@ -882,27 +1103,36 @@
"text/markdown": [
"| Provider | Environment variable | Set? | Models |\n",
"|----------|----------------------|------|--------|\n",
- "| `ai21` | `AI21_API_KEY` | ✅ | `ai21:j1-large`, `ai21:j1-grande`, `ai21:j1-jumbo`, `ai21:j1-grande-instruct`, `ai21:j2-large`, `ai21:j2-grande`, `ai21:j2-jumbo`, `ai21:j2-grande-instruct`, `ai21:j2-jumbo-instruct` |\n",
- "| `anthropic` | `ANTHROPIC_API_KEY` | ✅ | `anthropic:claude-v1`, `anthropic:claude-v1.0`, `anthropic:claude-v1.2`, `anthropic:claude-instant-v1`, `anthropic:claude-instant-v1.0` |\n",
- "| `cohere` | `COHERE_API_KEY` | ✅ | `cohere:medium`, `cohere:xlarge` |\n",
- "| `huggingface_hub` | `HUGGINGFACEHUB_API_TOKEN` | ✅ | See https://huggingface.co/models for a list of models. Pass a model's repository ID as the model ID; for example, `huggingface_hub:ExampleOwner/example-model`. |\n",
- "| `openai` | `OPENAI_API_KEY` | ✅ | `openai:text-davinci-003`, `openai:text-davinci-002`, `openai:text-curie-001`, `openai:text-babbage-001`, `openai:text-ada-001`, `openai:davinci`, `openai:curie`, `openai:babbage`, `openai:ada` |\n",
- "| `openai-chat` | `OPENAI_API_KEY` | ✅ | `openai-chat:gpt-4`, `openai-chat:gpt-4-0314`, `openai-chat:gpt-4-32k`, `openai-chat:gpt-4-32k-0314`, `openai-chat:gpt-3.5-turbo`, `openai-chat:gpt-3.5-turbo-0301` |\n",
- "| `sagemaker-endpoint` | Not applicable. | N/A | Specify an endpoint name as the model ID. In addition, you must include the `--region_name`, `--request_schema`, and the `--response_path` arguments. For more information, see the documentation about [SageMaker endpoints deployment](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-deployment.html) and about [using magic commands with SageMaker endpoints](https://jupyter-ai.readthedocs.io/en/latest/users/index.html#using-magic-commands-with-sagemaker-endpoints). |\n",
+ "| `ai21` | `AI21_API_KEY` | ✅ | `ai21:j1-large`, `ai21:j1-grande`, `ai21:j1-jumbo`, `ai21:j1-grande-instruct`, `ai21:j2-large`, `ai21:j2-grande`, `ai21:j2-jumbo`, `ai21:j2-grande-instruct`, `ai21:j2-jumbo-instruct` |\n",
+ "| `bedrock` | Not applicable. | N/A | `bedrock:amazon.titan-text-express-v1`, `bedrock:ai21.j2-ultra-v1`, `bedrock:ai21.j2-mid-v1`, `bedrock:cohere.command-light-text-v14`, `bedrock:cohere.command-text-v14`, `bedrock:meta.llama2-13b-chat-v1`, `bedrock:meta.llama2-70b-chat-v1` |\n",
+ "| `bedrock-chat` | Not applicable. | N/A | `bedrock-chat:anthropic.claude-v1`, `bedrock-chat:anthropic.claude-v2`, `bedrock-chat:anthropic.claude-v2:1`, `bedrock-chat:anthropic.claude-instant-v1` |\n",
+ "| `anthropic` | `ANTHROPIC_API_KEY` | ✅ | `anthropic:claude-v1`, `anthropic:claude-v1.0`, `anthropic:claude-v1.2`, `anthropic:claude-2`, `anthropic:claude-2.0`, `anthropic:claude-instant-v1`, `anthropic:claude-instant-v1.0`, `anthropic:claude-instant-v1.2` |\n",
+ "| `anthropic-chat` | `ANTHROPIC_API_KEY` | ✅ | `anthropic-chat:claude-v1`, `anthropic-chat:claude-v1.0`, `anthropic-chat:claude-v1.2`, `anthropic-chat:claude-2`, `anthropic-chat:claude-2.0`, `anthropic-chat:claude-instant-v1`, `anthropic-chat:claude-instant-v1.0`, `anthropic-chat:claude-instant-v1.2` |\n",
+ "| `azure-chat-openai` | `OPENAI_API_KEY` | ✅ | This provider does not define a list of models. |\n",
+ "| `cohere` | `COHERE_API_KEY` | ✅ | `cohere:command`, `cohere:command-nightly`, `cohere:command-light`, `cohere:command-light-nightly` |\n",
+ "| `gpt4all` | Not applicable. | N/A | `gpt4all:ggml-gpt4all-j-v1.2-jazzy`, `gpt4all:ggml-gpt4all-j-v1.3-groovy`, `gpt4all:ggml-gpt4all-l13b-snoozy`, `gpt4all:mistral-7b-openorca.Q4_0`, `gpt4all:mistral-7b-instruct-v0.1.Q4_0`, `gpt4all:gpt4all-falcon-q4_0`, `gpt4all:wizardlm-13b-v1.2.Q4_0`, `gpt4all:nous-hermes-llama2-13b.Q4_0`, `gpt4all:gpt4all-13b-snoozy-q4_0`, `gpt4all:mpt-7b-chat-merges-q4_0`, `gpt4all:orca-mini-3b-gguf2-q4_0`, `gpt4all:starcoder-q4_0`, `gpt4all:rift-coder-v0-7b-q4_0`, `gpt4all:em_german_mistral_v01.Q4_0` |\n",
+ "| `huggingface_hub` | `HUGGINGFACEHUB_API_TOKEN` | ✅ | See [https://huggingface.co/models](https://huggingface.co/models) for a list of models. Pass a model's repository ID as the model ID; for example, `huggingface_hub:ExampleOwner/example-model`. |\n",
+ "| `openai` | `OPENAI_API_KEY` | ✅ | `openai:babbage-002`, `openai:davinci-002`, `openai:gpt-3.5-turbo-instruct` |\n",
+ "| `openai-chat` | `OPENAI_API_KEY` | ✅ | `openai-chat:gpt-3.5-turbo`, `openai-chat:gpt-3.5-turbo-0301`, `openai-chat:gpt-3.5-turbo-0613`, `openai-chat:gpt-3.5-turbo-1106`, `openai-chat:gpt-3.5-turbo-16k`, `openai-chat:gpt-3.5-turbo-16k-0613`, `openai-chat:gpt-4`, `openai-chat:gpt-4-0613`, `openai-chat:gpt-4-32k`, `openai-chat:gpt-4-32k-0613`, `openai-chat:gpt-4-1106-preview` |\n",
+ "| `qianfan` | `QIANFAN_AK`, `QIANFAN_SK` | ❌ | `qianfan:ERNIE-Bot`, `qianfan:ERNIE-Bot-4` |\n",
+ "| `sagemaker-endpoint` | Not applicable. | N/A | Specify an endpoint name as the model ID. In addition, you must specify a region name, request schema, and response path. For more information, see the documentation about [SageMaker endpoints deployment](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-deployment.html) and about [using magic commands with SageMaker endpoints](https://jupyter-ai.readthedocs.io/en/latest/users/index.html#using-magic-commands-with-sagemaker-endpoints). |\n",
"\n",
"Aliases and custom commands:\n",
"\n",
"| Name | Target |\n",
"|------|--------|\n",
"| `gpt2` | `huggingface_hub:gpt2` |\n",
- "| `gpt3` | `openai:text-davinci-003` |\n",
+ "| `gpt3` | `openai:davinci-002` |\n",
"| `chatgpt` | `openai-chat:gpt-3.5-turbo` |\n",
"| `gpt4` | `openai-chat:gpt-4` |\n",
+ "| `ernie-bot` | `qianfan:ERNIE-Bot` |\n",
+ "| `ernie-bot-4` | `qianfan:ERNIE-Bot-4` |\n",
+ "| `titan` | `bedrock:amazon.titan-tg1-large` |\n",
"| `company` | *custom chain* |\n"
],
"text/plain": [
"ai21\n",
- "Requires environment variable AI21_API_KEY (set)\n",
+ "Requires environment variable: AI21_API_KEY (set)\n",
"* ai21:j1-large\n",
"* ai21:j1-grande\n",
"* ai21:j1-jumbo\n",
@@ -913,57 +1143,115 @@
"* ai21:j2-grande-instruct\n",
"* ai21:j2-jumbo-instruct\n",
"\n",
+ "bedrock\n",
+ "* bedrock:amazon.titan-text-express-v1\n",
+ "* bedrock:ai21.j2-ultra-v1\n",
+ "* bedrock:ai21.j2-mid-v1\n",
+ "* bedrock:cohere.command-light-text-v14\n",
+ "* bedrock:cohere.command-text-v14\n",
+ "* bedrock:meta.llama2-13b-chat-v1\n",
+ "* bedrock:meta.llama2-70b-chat-v1\n",
+ "\n",
+ "bedrock-chat\n",
+ "* bedrock-chat:anthropic.claude-v1\n",
+ "* bedrock-chat:anthropic.claude-v2\n",
+ "* bedrock-chat:anthropic.claude-v2:1\n",
+ "* bedrock-chat:anthropic.claude-instant-v1\n",
+ "\n",
"anthropic\n",
- "Requires environment variable ANTHROPIC_API_KEY (set)\n",
+ "Requires environment variable: ANTHROPIC_API_KEY (set)\n",
"* anthropic:claude-v1\n",
"* anthropic:claude-v1.0\n",
"* anthropic:claude-v1.2\n",
+ "* anthropic:claude-2\n",
+ "* anthropic:claude-2.0\n",
"* anthropic:claude-instant-v1\n",
"* anthropic:claude-instant-v1.0\n",
+ "* anthropic:claude-instant-v1.2\n",
+ "\n",
+ "anthropic-chat\n",
+ "Requires environment variable: ANTHROPIC_API_KEY (set)\n",
+ "* anthropic-chat:claude-v1\n",
+ "* anthropic-chat:claude-v1.0\n",
+ "* anthropic-chat:claude-v1.2\n",
+ "* anthropic-chat:claude-2\n",
+ "* anthropic-chat:claude-2.0\n",
+ "* anthropic-chat:claude-instant-v1\n",
+ "* anthropic-chat:claude-instant-v1.0\n",
+ "* anthropic-chat:claude-instant-v1.2\n",
+ "\n",
+ "azure-chat-openai\n",
+ "Requires environment variable: OPENAI_API_KEY (set)\n",
+ "* This provider does not define a list of models.\n",
"\n",
"cohere\n",
- "Requires environment variable COHERE_API_KEY (set)\n",
- "* cohere:medium\n",
- "* cohere:xlarge\n",
+ "Requires environment variable: COHERE_API_KEY (set)\n",
+ "* cohere:command\n",
+ "* cohere:command-nightly\n",
+ "* cohere:command-light\n",
+ "* cohere:command-light-nightly\n",
+ "\n",
+ "gpt4all\n",
+ "* gpt4all:ggml-gpt4all-j-v1.2-jazzy\n",
+ "* gpt4all:ggml-gpt4all-j-v1.3-groovy\n",
+ "* gpt4all:ggml-gpt4all-l13b-snoozy\n",
+ "* gpt4all:mistral-7b-openorca.Q4_0\n",
+ "* gpt4all:mistral-7b-instruct-v0.1.Q4_0\n",
+ "* gpt4all:gpt4all-falcon-q4_0\n",
+ "* gpt4all:wizardlm-13b-v1.2.Q4_0\n",
+ "* gpt4all:nous-hermes-llama2-13b.Q4_0\n",
+ "* gpt4all:gpt4all-13b-snoozy-q4_0\n",
+ "* gpt4all:mpt-7b-chat-merges-q4_0\n",
+ "* gpt4all:orca-mini-3b-gguf2-q4_0\n",
+ "* gpt4all:starcoder-q4_0\n",
+ "* gpt4all:rift-coder-v0-7b-q4_0\n",
+ "* gpt4all:em_german_mistral_v01.Q4_0\n",
"\n",
"huggingface_hub\n",
- "Requires environment variable HUGGINGFACEHUB_API_TOKEN (set)\n",
- "* See https://huggingface.co/models for a list of models. Pass a model's repository ID as the model ID; for example, `huggingface_hub:ExampleOwner/example-model`.\n",
+ "Requires environment variable: HUGGINGFACEHUB_API_TOKEN (set)\n",
+ "* See [https://huggingface.co/models](https://huggingface.co/models) for a list of models. Pass a model's repository ID as the model ID; for example, `huggingface_hub:ExampleOwner/example-model`.\n",
"\n",
"openai\n",
- "Requires environment variable OPENAI_API_KEY (set)\n",
- "* openai:text-davinci-003\n",
- "* openai:text-davinci-002\n",
- "* openai:text-curie-001\n",
- "* openai:text-babbage-001\n",
- "* openai:text-ada-001\n",
- "* openai:davinci\n",
- "* openai:curie\n",
- "* openai:babbage\n",
- "* openai:ada\n",
+ "Requires environment variable: OPENAI_API_KEY (set)\n",
+ "* openai:babbage-002\n",
+ "* openai:davinci-002\n",
+ "* openai:gpt-3.5-turbo-instruct\n",
"\n",
"openai-chat\n",
- "Requires environment variable OPENAI_API_KEY (set)\n",
- "* openai-chat:gpt-4\n",
- "* openai-chat:gpt-4-0314\n",
- "* openai-chat:gpt-4-32k\n",
- "* openai-chat:gpt-4-32k-0314\n",
+ "Requires environment variable: OPENAI_API_KEY (set)\n",
"* openai-chat:gpt-3.5-turbo\n",
"* openai-chat:gpt-3.5-turbo-0301\n",
+ "* openai-chat:gpt-3.5-turbo-0613\n",
+ "* openai-chat:gpt-3.5-turbo-1106\n",
+ "* openai-chat:gpt-3.5-turbo-16k\n",
+ "* openai-chat:gpt-3.5-turbo-16k-0613\n",
+ "* openai-chat:gpt-4\n",
+ "* openai-chat:gpt-4-0613\n",
+ "* openai-chat:gpt-4-32k\n",
+ "* openai-chat:gpt-4-32k-0613\n",
+ "* openai-chat:gpt-4-1106-preview\n",
+ "\n",
+ "qianfan\n",
+ "Requires environment variables: QIANFAN_AK (not set), QIANFAN_SK (not set)\n",
+ "* qianfan:ERNIE-Bot\n",
+ "* qianfan:ERNIE-Bot-4\n",
"\n",
"sagemaker-endpoint\n",
- "* Specify an endpoint name as the model ID. In addition, you must include the `--region_name`, `--request_schema`, and the `--response_path` arguments. For more information, see the documentation about [SageMaker endpoints deployment](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-deployment.html) and about [using magic commands with SageMaker endpoints](https://jupyter-ai.readthedocs.io/en/latest/users/index.html#using-magic-commands-with-sagemaker-endpoints).\n",
+ "* Specify an endpoint name as the model ID. In addition, you must specify a region name, request schema, and response path. For more information, see the documentation about [SageMaker endpoints deployment](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-deployment.html) and about [using magic commands with SageMaker endpoints](https://jupyter-ai.readthedocs.io/en/latest/users/index.html#using-magic-commands-with-sagemaker-endpoints).\n",
"\n",
"\n",
"Aliases and custom commands:\n",
"gpt2 - huggingface_hub:gpt2\n",
- "gpt3 - openai:text-davinci-003\n",
+ "gpt3 - openai:davinci-002\n",
"chatgpt - openai-chat:gpt-3.5-turbo\n",
"gpt4 - openai-chat:gpt-4\n",
+ "ernie-bot - qianfan:ERNIE-Bot\n",
+ "ernie-bot-4 - qianfan:ERNIE-Bot-4\n",
+ "titan - bedrock:amazon.titan-tg1-large\n",
"company - custom chain\n"
]
},
- "execution_count": 18,
+ "execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
@@ -974,22 +1262,20 @@
},
{
"cell_type": "code",
- "execution_count": 19,
+ "execution_count": 18,
"id": "cfef0fee-a7c6-49e4-8d90-9aa12f7b91d1",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
- "\n",
- "\n",
- "**Brightsocks**"
+ " Vox Socks"
],
"text/plain": [
""
]
},
- "execution_count": 19,
+ "execution_count": 18,
"metadata": {
"text/markdown": {
"jupyter_ai": {
@@ -1007,19 +1293,17 @@
},
{
"cell_type": "code",
- "execution_count": 20,
+ "execution_count": 19,
"id": "06c698e7-e2cf-41b5-88de-2be4d3b60eba",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
- "\n",
- "\n",
- "FunkySox."
+ " Spectra Socks "
]
},
- "execution_count": 20,
+ "execution_count": 19,
"metadata": {
"jupyter_ai": {
"custom_chain_id": "company"
diff --git a/package.json b/package.json
index 6c7a057ea..f1739cbe9 100644
--- a/package.json
+++ b/package.json
@@ -42,7 +42,7 @@
"nx": "^15.9.2"
},
"resolutions": {
- "@jupyterlab/completer": "4.1.0-beta.0"
+ "@jupyterlab/completer": "^4.1.0"
},
"nx": {
"includedScripts": []
diff --git a/packages/jupyter-ai-magics/jupyter_ai_magics/aliases.py b/packages/jupyter-ai-magics/jupyter_ai_magics/aliases.py
index 96cac4efe..f34826428 100644
--- a/packages/jupyter-ai-magics/jupyter_ai_magics/aliases.py
+++ b/packages/jupyter-ai-magics/jupyter_ai_magics/aliases.py
@@ -1,6 +1,6 @@
MODEL_ID_ALIASES = {
"gpt2": "huggingface_hub:gpt2",
- "gpt3": "openai:text-davinci-003",
+ "gpt3": "openai:davinci-002",
"chatgpt": "openai-chat:gpt-3.5-turbo",
"gpt4": "openai-chat:gpt-4",
"ernie-bot": "qianfan:ERNIE-Bot",
diff --git a/packages/jupyter-ai-magics/jupyter_ai_magics/embedding_providers.py b/packages/jupyter-ai-magics/jupyter_ai_magics/embedding_providers.py
index ca9fed4b4..9ef8720ba 100644
--- a/packages/jupyter-ai-magics/jupyter_ai_magics/embedding_providers.py
+++ b/packages/jupyter-ai-magics/jupyter_ai_magics/embedding_providers.py
@@ -80,7 +80,15 @@ class OpenAIEmbeddingsProvider(BaseEmbeddingsProvider, OpenAIEmbeddings):
class CohereEmbeddingsProvider(BaseEmbeddingsProvider, CohereEmbeddings):
id = "cohere"
name = "Cohere"
- models = ["large", "multilingual-22-12", "small"]
+ models = [
+ "embed-english-v2.0",
+ "embed-english-light-v2.0",
+ "embed-multilingual-v2.0",
+ "embed-english-v3.0",
+ "embed-english-light-v3.0",
+ "embed-multilingual-v3.0",
+ "embed-multilingual-light-v3.0",
+ ]
model_id_key = "model"
pypi_package_deps = ["cohere"]
auth_strategy = EnvAuthStrategy(name="COHERE_API_KEY")
diff --git a/packages/jupyter-ai-magics/jupyter_ai_magics/magics.py b/packages/jupyter-ai-magics/jupyter_ai_magics/magics.py
index f6239fbdf..f1efcd1eb 100644
--- a/packages/jupyter-ai-magics/jupyter_ai_magics/magics.py
+++ b/packages/jupyter-ai-magics/jupyter_ai_magics/magics.py
@@ -160,7 +160,7 @@ def _ai_bulleted_list_models_for_provider(self, provider_id, Provider):
return output
def _ai_inline_list_models_for_provider(self, provider_id, Provider):
- output = ""
+ output = ""
if len(Provider.models) == 1 and Provider.models[0] == "*":
if Provider.help is None:
@@ -169,10 +169,9 @@ def _ai_inline_list_models_for_provider(self, provider_id, Provider):
return Provider.help
for model_id in Provider.models:
- output += f", `{provider_id}:{model_id}`"
+            output += f"- `{provider_id}:{model_id}`\n"
- # Remove initial comma
- return re.sub(r"^, ", "", output)
+        return output + "\n"
# Is the required environment variable set?
def _ai_env_status_for_provider_markdown(self, provider_id):
@@ -481,8 +480,13 @@ def run_ai_cell(self, args: CellArgs, prompt: str):
if args.model_id in self.custom_model_registry and isinstance(
self.custom_model_registry[args.model_id], LLMChain
):
+ # Get the output, either as raw text or as the contents of the 'text' key of a dict
+ invoke_output = self.custom_model_registry[args.model_id].invoke(prompt)
+ if isinstance(invoke_output, dict):
+ invoke_output = invoke_output.get("text")
+
return self.display_output(
- self.custom_model_registry[args.model_id].run(prompt),
+ invoke_output,
args.format,
{"jupyter_ai": {"custom_chain_id": args.model_id}},
)
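The hunk above swaps the deprecated `LLMChain.run()` for `invoke()`, which may return either a plain string (older chains) or a dict keyed by the chain's output key (`"text"` by default). A standalone sketch of that normalization; the helper name is illustrative, not part of the patch:

```python
def normalize_invoke_output(invoke_output):
    """Accept both LLMChain.invoke() shapes: dict with a 'text' key, or str."""
    if isinstance(invoke_output, dict):
        return invoke_output.get("text")
    return invoke_output

# Chains returning a dict and chains returning raw text are handled alike.
assert normalize_invoke_output({"text": "Vox Socks"}) == "Vox Socks"
assert normalize_invoke_output("FunkySox.") == "FunkySox."
```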
diff --git a/packages/jupyter-ai-magics/jupyter_ai_magics/partner_providers/nvidia.py b/packages/jupyter-ai-magics/jupyter_ai_magics/partner_providers/nvidia.py
new file mode 100644
index 000000000..26137eb9f
--- /dev/null
+++ b/packages/jupyter-ai-magics/jupyter_ai_magics/partner_providers/nvidia.py
@@ -0,0 +1,23 @@
+from jupyter_ai_magics.providers import BaseProvider, EnvAuthStrategy
+from langchain_nvidia_ai_endpoints import ChatNVIDIA
+
+
+class ChatNVIDIAProvider(BaseProvider, ChatNVIDIA):
+ id = "nvidia-chat"
+ name = "NVIDIA"
+ models = [
+ "playground_llama2_70b",
+ "playground_nemotron_steerlm_8b",
+ "playground_mistral_7b",
+ "playground_nv_llama2_rlhf_70b",
+ "playground_llama2_13b",
+ "playground_steerlm_llama_70b",
+ "playground_llama2_code_13b",
+ "playground_yi_34b",
+ "playground_mixtral_8x7b",
+ "playground_neva_22b",
+ "playground_llama2_code_34b",
+ ]
+ model_id_key = "model"
+ auth_strategy = EnvAuthStrategy(name="NVIDIA_API_KEY")
+ pypi_package_deps = ["langchain_nvidia_ai_endpoints"]
diff --git a/packages/jupyter-ai-magics/jupyter_ai_magics/providers.py b/packages/jupyter-ai-magics/jupyter_ai_magics/providers.py
index 21e766c3b..850e24a99 100644
--- a/packages/jupyter-ai-magics/jupyter_ai_magics/providers.py
+++ b/packages/jupyter-ai-magics/jupyter_ai_magics/providers.py
@@ -11,7 +11,13 @@
from langchain.chat_models.base import BaseChatModel
from langchain.llms.sagemaker_endpoint import LLMContentHandler
from langchain.llms.utils import enforce_stop_tokens
-from langchain.prompts import PromptTemplate
+from langchain.prompts import (
+ ChatPromptTemplate,
+ HumanMessagePromptTemplate,
+ MessagesPlaceholder,
+ PromptTemplate,
+ SystemMessagePromptTemplate,
+)
from langchain.pydantic_v1 import BaseModel, Extra, root_validator
from langchain.schema import LLMResult
from langchain.utils import get_from_dict_or_env
@@ -42,6 +48,49 @@
from pydantic.main import ModelMetaclass
+CHAT_SYSTEM_PROMPT = """
+You are Jupyternaut, a conversational assistant living in JupyterLab to help users.
+You are not a language model, but rather an application built on a foundation model from {provider_name} called {local_model_id}.
+You are talkative and you provide lots of specific details from the foundation model's context.
+You may use Markdown to format your response.
+Code blocks must be formatted in Markdown.
+Math should be rendered with inline TeX markup, surrounded by $.
+If you do not know the answer to a question, answer truthfully by responding that you do not know.
+The following is a friendly conversation between you and a human.
+""".strip()
+
+CHAT_DEFAULT_TEMPLATE = """Current conversation:
+{history}
+Human: {input}
+AI:"""
+
+
+COMPLETION_SYSTEM_PROMPT = """
+You are an application built to provide helpful code completion suggestions.
+You should only produce code. Keep comments to minimum, use the
+programming language comment syntax. Produce clean code.
+The code is written in JupyterLab, a data analysis and code development
+environment which can execute code extended with additional syntax for
+interactive features, such as magics.
+""".strip()
+
+# only add the suffix bit if present to save input tokens/computation time
+COMPLETION_DEFAULT_TEMPLATE = """
+The document is called `{{filename}}` and written in {{language}}.
+{% if suffix %}
+The code after the completion request is:
+
+```
+{{suffix}}
+```
+{% endif %}
+
+Complete the following code:
+
+```
+{{prefix}}"""
+
+
class EnvAuthStrategy(BaseModel):
"""Require one auth token via an environment variable."""
@@ -265,6 +314,55 @@ def get_prompt_template(self, format) -> PromptTemplate:
else:
return self.prompt_templates["text"] # Default to plain format
+ def get_chat_prompt_template(self) -> PromptTemplate:
+ """
+ Produce a prompt template optimised for chat conversation.
+ The template should take two variables: history and input.
+ """
+ name = self.__class__.name
+ if self.is_chat_provider:
+ return ChatPromptTemplate.from_messages(
+ [
+ SystemMessagePromptTemplate.from_template(
+ CHAT_SYSTEM_PROMPT
+ ).format(provider_name=name, local_model_id=self.model_id),
+ MessagesPlaceholder(variable_name="history"),
+ HumanMessagePromptTemplate.from_template("{input}"),
+ ]
+ )
+ else:
+ return PromptTemplate(
+ input_variables=["history", "input"],
+ template=CHAT_SYSTEM_PROMPT.format(
+ provider_name=name, local_model_id=self.model_id
+ )
+ + "\n\n"
+ + CHAT_DEFAULT_TEMPLATE,
+ )
+
+ def get_completion_prompt_template(self) -> PromptTemplate:
+ """
+ Produce a prompt template optimised for inline code or text completion.
+ The template should take variables: prefix, suffix, language, filename.
+ """
+ if self.is_chat_provider:
+ return ChatPromptTemplate.from_messages(
+ [
+ SystemMessagePromptTemplate.from_template(COMPLETION_SYSTEM_PROMPT),
+ HumanMessagePromptTemplate.from_template(
+ COMPLETION_DEFAULT_TEMPLATE, template_format="jinja2"
+ ),
+ ]
+ )
+ else:
+ return PromptTemplate(
+ input_variables=["prefix", "suffix", "language", "filename"],
+ template=COMPLETION_SYSTEM_PROMPT
+ + "\n\n"
+ + COMPLETION_DEFAULT_TEMPLATE,
+ template_format="jinja2",
+ )
+
@property
def is_chat_provider(self):
return isinstance(self, BaseChatModel)
@@ -536,18 +634,7 @@ async def _acall(self, *args, **kwargs) -> Coroutine[Any, Any, str]:
class OpenAIProvider(BaseProvider, OpenAI):
id = "openai"
name = "OpenAI"
- models = [
- "text-davinci-003",
- "text-davinci-002",
- "text-curie-001",
- "text-babbage-001",
- "text-ada-001",
- "gpt-3.5-turbo-instruct",
- "davinci",
- "curie",
- "babbage",
- "ada",
- ]
+ models = ["babbage-002", "davinci-002", "gpt-3.5-turbo-instruct"]
model_id_key = "model_name"
pypi_package_deps = ["openai"]
auth_strategy = EnvAuthStrategy(name="OPENAI_API_KEY")
@@ -570,15 +657,14 @@ class ChatOpenAIProvider(BaseProvider, ChatOpenAI):
name = "OpenAI"
models = [
"gpt-3.5-turbo",
+ "gpt-3.5-turbo-0301", # Deprecated as of 2024-06-13
+ "gpt-3.5-turbo-0613", # Deprecated as of 2024-06-13
+ "gpt-3.5-turbo-1106",
"gpt-3.5-turbo-16k",
- "gpt-3.5-turbo-0301",
- "gpt-3.5-turbo-0613",
- "gpt-3.5-turbo-16k-0613",
+ "gpt-3.5-turbo-16k-0613", # Deprecated as of 2024-06-13
"gpt-4",
- "gpt-4-0314",
"gpt-4-0613",
"gpt-4-32k",
- "gpt-4-32k-0314",
"gpt-4-32k-0613",
"gpt-4-1106-preview",
]
diff --git a/packages/jupyter-ai-magics/jupyter_ai_magics/utils.py b/packages/jupyter-ai-magics/jupyter_ai_magics/utils.py
index 0441d707c..983bbf2d5 100644
--- a/packages/jupyter-ai-magics/jupyter_ai_magics/utils.py
+++ b/packages/jupyter-ai-magics/jupyter_ai_magics/utils.py
@@ -26,16 +26,22 @@ def get_lm_providers(
restrictions = {"allowed_providers": None, "blocked_providers": None}
providers = {}
eps = entry_points()
- model_provider_eps = eps.select(group="jupyter_ai.model_providers")
- for model_provider_ep in model_provider_eps:
+ provider_ep_group = eps.select(group="jupyter_ai.model_providers")
+ for provider_ep in provider_ep_group:
try:
- provider = model_provider_ep.load()
+ provider = provider_ep.load()
+ except ImportError as e:
+ log.warning(
+ f"Unable to load model provider `{provider_ep.name}`. Please install the `{e.name}` package."
+ )
+ continue
except Exception as e:
log.error(
- f"Unable to load model provider class from entry point `{model_provider_ep.name}`: %s.",
- e,
+ f"Unable to load model provider `{provider_ep.name}`. Printing full exception below."
)
+ log.exception(e)
continue
+
if not is_provider_allowed(provider.id, restrictions):
log.info(f"Skipping blocked provider `{provider.id}`.")
continue
diff --git a/packages/jupyter-ai-magics/pyproject.toml b/packages/jupyter-ai-magics/pyproject.toml
index 0d7d73078..377776fba 100644
--- a/packages/jupyter-ai-magics/pyproject.toml
+++ b/packages/jupyter-ai-magics/pyproject.toml
@@ -37,10 +37,11 @@ test = ["coverage", "pytest", "pytest-asyncio", "pytest-cov"]
all = [
"ai21",
"anthropic~=0.3.0",
- "cohere",
+ "cohere>4.40,<5",
"gpt4all",
"huggingface_hub",
"ipywidgets",
+ "langchain_nvidia_ai_endpoints",
"pillow",
"openai~=1.6.1",
"boto3",
@@ -61,6 +62,7 @@ amazon-bedrock = "jupyter_ai_magics:BedrockProvider"
anthropic-chat = "jupyter_ai_magics:ChatAnthropicProvider"
amazon-bedrock-chat = "jupyter_ai_magics:BedrockChatProvider"
qianfan = "jupyter_ai_magics:QianfanProvider"
+nvidia-chat = "jupyter_ai_magics.partner_providers.nvidia:ChatNVIDIAProvider"
[project.entry-points."jupyter_ai.embeddings_model_providers"]
bedrock = "jupyter_ai_magics:BedrockEmbeddingsProvider"
diff --git a/packages/jupyter-ai-module-cookiecutter/{{cookiecutter.labextension_name}}/{{cookiecutter.python_name}}/engine.py b/packages/jupyter-ai-module-cookiecutter/{{cookiecutter.labextension_name}}/{{cookiecutter.python_name}}/engine.py
index c32e86148..63066ef07 100644
--- a/packages/jupyter-ai-module-cookiecutter/{{cookiecutter.labextension_name}}/{{cookiecutter.python_name}}/engine.py
+++ b/packages/jupyter-ai-module-cookiecutter/{{cookiecutter.labextension_name}}/{{cookiecutter.python_name}}/engine.py
@@ -29,7 +29,7 @@ async def execute(
# prompt = task.prompt_template.format(**prompt_variables)
# openai.api_key = self.api_key
# response = openai.Completion.create(
- # model="text-davinci-003",
+ # model="davinci-002",
# prompt=prompt,
# ...
# )
diff --git a/packages/jupyter-ai/jupyter_ai/chat_handlers/default.py b/packages/jupyter-ai/jupyter_ai/chat_handlers/default.py
index 0db83afdd..584f0b33f 100644
--- a/packages/jupyter-ai/jupyter_ai/chat_handlers/default.py
+++ b/packages/jupyter-ai/jupyter_ai/chat_handlers/default.py
@@ -4,32 +4,9 @@
from jupyter_ai_magics.providers import BaseProvider
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferWindowMemory
-from langchain.prompts import (
- ChatPromptTemplate,
- HumanMessagePromptTemplate,
- MessagesPlaceholder,
- PromptTemplate,
- SystemMessagePromptTemplate,
-)
from .base import BaseChatHandler, SlashCommandRoutingType
-SYSTEM_PROMPT = """
-You are Jupyternaut, a conversational assistant living in JupyterLab to help users.
-You are not a language model, but rather an application built on a foundation model from {provider_name} called {local_model_id}.
-You are talkative and you provide lots of specific details from the foundation model's context.
-You may use Markdown to format your response.
-Code blocks must be formatted in Markdown.
-Math should be rendered with inline TeX markup, surrounded by $.
-If you do not know the answer to a question, answer truthfully by responding that you do not know.
-The following is a friendly conversation between you and a human.
-""".strip()
-
-DEFAULT_TEMPLATE = """Current conversation:
-{history}
-Human: {input}
-AI:"""
-
class DefaultChatHandler(BaseChatHandler):
id = "default"
@@ -49,27 +26,10 @@ def create_llm_chain(
model_parameters = self.get_model_parameters(provider, provider_params)
llm = provider(**provider_params, **model_parameters)
- if llm.is_chat_provider:
- prompt_template = ChatPromptTemplate.from_messages(
- [
- SystemMessagePromptTemplate.from_template(SYSTEM_PROMPT).format(
- provider_name=provider.name, local_model_id=llm.model_id
- ),
- MessagesPlaceholder(variable_name="history"),
- HumanMessagePromptTemplate.from_template("{input}"),
- ]
- )
- self.memory = ConversationBufferWindowMemory(return_messages=True, k=2)
- else:
- prompt_template = PromptTemplate(
- input_variables=["history", "input"],
- template=SYSTEM_PROMPT.format(
- provider_name=provider.name, local_model_id=llm.model_id
- )
- + "\n\n"
- + DEFAULT_TEMPLATE,
- )
- self.memory = ConversationBufferWindowMemory(k=2)
+ prompt_template = llm.get_chat_prompt_template()
+ self.memory = ConversationBufferWindowMemory(
+ return_messages=llm.is_chat_provider, k=2
+ )
self.llm = llm
self.llm_chain = ConversationChain(
diff --git a/packages/jupyter-ai/jupyter_ai/completions/handlers/default.py b/packages/jupyter-ai/jupyter_ai/completions/handlers/default.py
index 687e41fed..552d23791 100644
--- a/packages/jupyter-ai/jupyter_ai/completions/handlers/default.py
+++ b/packages/jupyter-ai/jupyter_ai/completions/handlers/default.py
@@ -18,32 +18,6 @@
)
from .base import BaseInlineCompletionHandler
-SYSTEM_PROMPT = """
-You are an application built to provide helpful code completion suggestions.
-You should only produce code. Keep comments to minimum, use the
-programming language comment syntax. Produce clean code.
-The code is written in JupyterLab, a data analysis and code development
-environment which can execute code extended with additional syntax for
-interactive features, such as magics.
-""".strip()
-
-AFTER_TEMPLATE = """
-The code after the completion request is:
-
-```
-{suffix}
-```
-""".strip()
-
-DEFAULT_TEMPLATE = """
-The document is called `{filename}` and written in {language}.
-{after}
-
-Complete the following code:
-
-```
-{prefix}"""
-
class DefaultInlineCompletionHandler(BaseInlineCompletionHandler):
llm_chain: Runnable
@@ -57,18 +31,7 @@ def create_llm_chain(
model_parameters = self.get_model_parameters(provider, provider_params)
llm = provider(**provider_params, **model_parameters)
- if llm.is_chat_provider:
- prompt_template = ChatPromptTemplate.from_messages(
- [
- SystemMessagePromptTemplate.from_template(SYSTEM_PROMPT),
- HumanMessagePromptTemplate.from_template(DEFAULT_TEMPLATE),
- ]
- )
- else:
- prompt_template = PromptTemplate(
- input_variables=["prefix", "suffix", "language", "filename"],
- template=SYSTEM_PROMPT + "\n\n" + DEFAULT_TEMPLATE,
- )
+ prompt_template = llm.get_completion_prompt_template()
self.llm = llm
self.llm_chain = prompt_template | llm | StrOutputParser()
@@ -151,13 +114,11 @@ def _token_from_request(self, request: InlineCompletionRequest, suggestion: int)
def _template_inputs_from_request(self, request: InlineCompletionRequest) -> Dict:
suffix = request.suffix.strip()
- # only add the suffix template if the suffix is there to save input tokens/computation time
- after = AFTER_TEMPLATE.format(suffix=suffix) if suffix else ""
filename = request.path.split("/")[-1] if request.path else "untitled"
return {
"prefix": request.prefix,
- "after": after,
+ "suffix": suffix,
"language": request.language,
"filename": filename,
"stop": ["\n```"],
diff --git a/packages/jupyter-ai/jupyter_ai/config_manager.py b/packages/jupyter-ai/jupyter_ai/config_manager.py
index 82ef03126..01d3fe766 100644
--- a/packages/jupyter-ai/jupyter_ai/config_manager.py
+++ b/packages/jupyter-ai/jupyter_ai/config_manager.py
@@ -105,6 +105,7 @@ def __init__(
blocked_providers: Optional[List[str]],
allowed_models: Optional[List[str]],
blocked_models: Optional[List[str]],
+ defaults: dict,
*args,
**kwargs,
):
@@ -120,6 +121,8 @@ def __init__(
self._blocked_providers = blocked_providers
self._allowed_models = allowed_models
self._blocked_models = blocked_models
+ self._defaults = defaults
+ """Provider defaults."""
self._last_read: Optional[int] = None
"""When the server last read the config file. If the file was not
@@ -146,14 +149,20 @@ def _init_validator(self) -> Validator:
self.validator = Validator(schema)
def _init_config(self):
+ default_config = self._init_defaults()
if os.path.exists(self.config_path):
- self._process_existing_config()
+ self._process_existing_config(default_config)
else:
- self._create_default_config()
+ self._create_default_config(default_config)
- def _process_existing_config(self):
+ def _process_existing_config(self, default_config):
with open(self.config_path, encoding="utf-8") as f:
- config = GlobalConfig(**json.loads(f.read()))
+ existing_config = json.loads(f.read())
+ merged_config = Merger.merge(
+ default_config,
+ {k: v for k, v in existing_config.items() if v is not None},
+ )
+ config = GlobalConfig(**merged_config)
validated_config = self._validate_lm_em_id(config)
# re-write to the file to validate the config and apply any
@@ -192,14 +201,23 @@ def _validate_lm_em_id(self, config):
return config
- def _create_default_config(self):
- properties = self.validator.schema.get("properties", {})
+ def _create_default_config(self, default_config):
+ self._write_config(GlobalConfig(**default_config))
+
+ def _init_defaults(self):
field_list = GlobalConfig.__fields__.keys()
+ properties = self.validator.schema.get("properties", {})
field_dict = {
field: properties.get(field).get("default") for field in field_list
}
- default_config = GlobalConfig(**field_dict)
- self._write_config(default_config)
+ if self._defaults is None:
+ return field_dict
+
+ for field in field_list:
+ default_value = self._defaults.get(field)
+ if default_value is not None:
+ field_dict[field] = default_value
+ return field_dict
def _read_config(self) -> GlobalConfig:
"""Returns the user's current configuration as a GlobalConfig object.
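The `_init_defaults` logic above lets a trait-supplied default (e.g. `--AiExtension.default_language_model`) override the JSON-schema default, but only when the trait value is not `None`. A standalone sketch, with plain dicts standing in for the schema and traitlets config:

```python
def init_defaults(schema_defaults, trait_defaults):
    # Start from the schema defaults, then let any non-None
    # trait-configured value override the corresponding field.
    field_dict = dict(schema_defaults)
    if trait_defaults is None:
        return field_dict
    for field in field_dict:
        value = trait_defaults.get(field)
        if value is not None:
            field_dict[field] = value
    return field_dict

schema = {"model_provider_id": None, "api_keys": None}
traits = {"model_provider_id": "bedrock-chat:anthropic.claude-v2"}
assert init_defaults(schema, traits) == {
    "model_provider_id": "bedrock-chat:anthropic.claude-v2",
    "api_keys": None,
}
```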
diff --git a/packages/jupyter-ai/jupyter_ai/extension.py b/packages/jupyter-ai/jupyter_ai/extension.py
index e3958fc7b..800c91932 100644
--- a/packages/jupyter-ai/jupyter_ai/extension.py
+++ b/packages/jupyter-ai/jupyter_ai/extension.py
@@ -106,6 +106,38 @@ class AiExtension(ExtensionApp):
config=True,
)
+ default_language_model = Unicode(
+ default_value=None,
+ allow_none=True,
+ help="""
+ Default language model to use, as string in the format
+ <provider-id>:<model-id>, defaults to None.
+ """,
+ config=True,
+ )
+
+ default_embeddings_model = Unicode(
+ default_value=None,
+ allow_none=True,
+ help="""
+ Default embeddings model to use, as string in the format
+ <provider-id>:<model-id>, defaults to None.
+ """,
+ config=True,
+ )
+
+ default_api_keys = Dict(
+ key_trait=Unicode(),
+ value_trait=Unicode(),
+ default_value=None,
+ allow_none=True,
+ help="""
+ Default API keys for model providers, as a dictionary,
+ in the format `<key-name>:<key-value>`. Defaults to None.
+ """,
+ config=True,
+ )
+
def initialize_settings(self):
start = time.time()
@@ -124,6 +156,13 @@ def initialize_settings(self):
self.settings["model_parameters"] = self.model_parameters
self.log.info(f"Configured model parameters: {self.model_parameters}")
+ defaults = {
+ "model_provider_id": self.default_language_model,
+ "embeddings_provider_id": self.default_embeddings_model,
+ "api_keys": self.default_api_keys,
+ "fields": self.model_parameters,
+ }
+
# Fetch LM & EM providers
self.settings["lm_providers"] = get_lm_providers(
log=self.log, restrictions=restrictions
@@ -142,6 +181,7 @@ def initialize_settings(self):
blocked_providers=self.blocked_providers,
allowed_models=self.allowed_models,
blocked_models=self.blocked_models,
+ defaults=defaults,
)
self.log.info("Registered providers.")
diff --git a/packages/jupyter-ai/jupyter_ai/tests/test_config_manager.py b/packages/jupyter-ai/jupyter_ai/tests/test_config_manager.py
index c238fc448..9aa16d2f8 100644
--- a/packages/jupyter-ai/jupyter_ai/tests/test_config_manager.py
+++ b/packages/jupyter-ai/jupyter_ai/tests/test_config_manager.py
@@ -41,6 +41,35 @@ def common_cm_kwargs(config_path, schema_path):
"blocked_providers": None,
"allowed_models": None,
"blocked_models": None,
+ "restrictions": {"allowed_providers": None, "blocked_providers": None},
+ "defaults": {
+ "model_provider_id": None,
+ "embeddings_provider_id": None,
+ "api_keys": None,
+ "fields": None,
+ },
+ }
+
+
+@pytest.fixture
+def cm_kargs_with_defaults(config_path, schema_path, common_cm_kwargs):
+ """Kwargs that are commonly used when initializing the CM."""
+ log = logging.getLogger()
+ lm_providers = get_lm_providers()
+ em_providers = get_em_providers()
+ return {
+ **common_cm_kwargs,
+ "defaults": {
+ "model_provider_id": "bedrock-chat:anthropic.claude-v1",
+ "embeddings_provider_id": "bedrock:amazon.titan-embed-text-v1",
+ "api_keys": {"OPENAI_API_KEY": "open-ai-key-value"},
+ "fields": {
+ "bedrock-chat:anthropic.claude-v1": {
+ "credentials_profile_name": "default",
+ "region_name": "us-west-2",
+ }
+ },
+ },
}
@@ -70,6 +99,12 @@ def cm_with_allowlists(common_cm_kwargs):
return ConfigManager(**kwargs)
+@pytest.fixture
+def cm_with_defaults(cm_kargs_with_defaults):
+ """A ConfigManager instance initialized with explicit provider defaults."""
+ return ConfigManager(**cm_kargs_with_defaults)
+
+
@pytest.fixture(autouse=True)
def reset(config_path, schema_path):
"""Fixture that deletes the config and config schema after each test."""
@@ -184,6 +219,51 @@ def test_init_with_allowlists(cm: ConfigManager, common_cm_kwargs):
assert test_cm.em_gid == None
+def test_init_with_default_values(
+ cm_with_defaults: ConfigManager,
+ config_path: str,
+ schema_path: str,
+ common_cm_kwargs,
+):
+ """
+ Test that the ConfigManager initializes with the expected default values.
+
+ Args:
+ cm_with_defaults (ConfigManager): A ConfigManager instance with default values.
+ config_path (str): The path to the configuration file.
+ schema_path (str): The path to the schema file.
+ """
+ config_response = cm_with_defaults.get_config()
+ # assert config response
+ assert config_response.model_provider_id == "bedrock-chat:anthropic.claude-v1"
+ assert (
+ config_response.embeddings_provider_id == "bedrock:amazon.titan-embed-text-v1"
+ )
+ assert config_response.api_keys == ["OPENAI_API_KEY"]
+ assert config_response.fields == {
+ "bedrock-chat:anthropic.claude-v1": {
+ "credentials_profile_name": "default",
+ "region_name": "us-west-2",
+ }
+ }
+
+ del cm_with_defaults
+
+ log = logging.getLogger()
+ lm_providers = get_lm_providers()
+ em_providers = get_em_providers()
+ kwargs = {
+ **common_cm_kwargs,
+ "defaults": {"model_provider_id": "bedrock-chat:anthropic.claude-v2"},
+ }
+ cm_with_defaults_override = ConfigManager(**kwargs)
+
+ assert (
+ cm_with_defaults_override.get_config().model_provider_id
+ == "bedrock-chat:anthropic.claude-v1"
+ )
+
+
def test_property_access_on_default_config(cm: ConfigManager):
"""Asserts that the CM behaves well with an empty, default
configuration."""
diff --git a/packages/jupyter-ai/src/components/chat.tsx b/packages/jupyter-ai/src/components/chat.tsx
index ded339c70..53ba45f1a 100644
--- a/packages/jupyter-ai/src/components/chat.tsx
+++ b/packages/jupyter-ai/src/components/chat.tsx
@@ -4,6 +4,7 @@ import { Button, IconButton, Stack } from '@mui/material';
import SettingsIcon from '@mui/icons-material/Settings';
import ArrowBackIcon from '@mui/icons-material/ArrowBack';
import type { Awareness } from 'y-protocols/awareness';
+import type { IThemeManager } from '@jupyterlab/apputils';
import { JlThemeProvider } from './jl-theme-provider';
import { ChatMessages } from './chat-messages';
@@ -178,6 +179,7 @@ export type ChatProps = {
selectionWatcher: SelectionWatcher;
chatHandler: ChatHandler;
globalAwareness: Awareness | null;
+ themeManager: IThemeManager | null;
chatView?: ChatView;
};
@@ -190,7 +192,7 @@ export function Chat(props: ChatProps): JSX.Element {
const [view, setView] = useState(props.chatView || ChatView.Chat);
return (
-    <JlThemeProvider>
+    <JlThemeProvider themeManager={props.themeManager}>
diff --git a/packages/jupyter-ai/src/components/jl-theme-provider.tsx b/packages/jupyter-ai/src/components/jl-theme-provider.tsx
--- a/packages/jupyter-ai/src/components/jl-theme-provider.tsx
+++ b/packages/jupyter-ai/src/components/jl-theme-provider.tsx
@@ -1,9 +1,11 @@
 import React, { useState, useEffect } from 'react';
 import { Theme } from '@mui/material';
 import { createTheme } from '@mui/material/styles';
+import type { IThemeManager } from '@jupyterlab/apputils';
 import { getJupyterLabTheme } from '../theme-provider';

 export function JlThemeProvider(props: {
+  themeManager: IThemeManager | null;
   children: React.ReactNode;
 }): JSX.Element {
   const [theme, setTheme] = useState<Theme>(createTheme());
@@ -12,7 +14,9 @@ export function JlThemeProvider(props: {
async function setJlTheme() {
setTheme(await getJupyterLabTheme());
}
+
setJlTheme();
+ props.themeManager?.themeChanged.connect(setJlTheme);
}, []);
   return <ThemeProvider theme={theme}>{props.children}</ThemeProvider>;
diff --git a/packages/jupyter-ai/src/index.ts b/packages/jupyter-ai/src/index.ts
index f6832f878..807629eae 100644
--- a/packages/jupyter-ai/src/index.ts
+++ b/packages/jupyter-ai/src/index.ts
@@ -4,7 +4,11 @@ import {
ILayoutRestorer
} from '@jupyterlab/application';
-import { IWidgetTracker, ReactWidget } from '@jupyterlab/apputils';
+import {
+ IWidgetTracker,
+ ReactWidget,
+ IThemeManager
+} from '@jupyterlab/apputils';
import { IDocumentWidget } from '@jupyterlab/docregistry';
import { IGlobalAwareness } from '@jupyter/collaboration';
import type { Awareness } from 'y-protocols/awareness';
@@ -23,11 +27,12 @@ export type DocumentTracker = IWidgetTracker<IDocumentWidget>;
const plugin: JupyterFrontEndPlugin<void> = {
id: 'jupyter_ai:plugin',
autoStart: true,
- optional: [IGlobalAwareness, ILayoutRestorer],
+ optional: [IGlobalAwareness, ILayoutRestorer, IThemeManager],
activate: async (
app: JupyterFrontEnd,
globalAwareness: Awareness | null,
- restorer: ILayoutRestorer | null
+ restorer: ILayoutRestorer | null,
+ themeManager: IThemeManager | null
) => {
/**
* Initialize selection watcher singleton
@@ -45,10 +50,11 @@ const plugin: JupyterFrontEndPlugin = {
chatWidget = buildChatSidebar(
selectionWatcher,
chatHandler,
- globalAwareness
+ globalAwareness,
+ themeManager
);
} catch (e) {
- chatWidget = buildErrorWidget();
+ chatWidget = buildErrorWidget(themeManager);
}
/**
diff --git a/packages/jupyter-ai/src/theme-provider.ts b/packages/jupyter-ai/src/theme-provider.ts
index 405f08198..02db8d369 100644
--- a/packages/jupyter-ai/src/theme-provider.ts
+++ b/packages/jupyter-ai/src/theme-provider.ts
@@ -13,7 +13,6 @@ export async function pollUntilReady(): Promise<void> {
export async function getJupyterLabTheme(): Promise<Theme> {
await pollUntilReady();
const light = document.body.getAttribute('data-jp-theme-light');
- const primaryFontColor = getCSSVariable('--jp-ui-font-color1');
return createTheme({
spacing: 4,
components: {
@@ -113,7 +112,7 @@ export async function getJupyterLabTheme(): Promise<Theme> {
dark: getCSSVariable('--jp-success-color0')
},
text: {
- primary: primaryFontColor,
+ primary: getCSSVariable('--jp-ui-font-color1'),
secondary: getCSSVariable('--jp-ui-font-color2'),
disabled: getCSSVariable('--jp-ui-font-color3')
}
@@ -127,11 +126,6 @@ export async function getJupyterLabTheme(): Promise<Theme> {
htmlFontSize: 16,
button: {
textTransform: 'capitalize'
- },
- // this is undocumented as of the time of writing.
- // https://stackoverflow.com/a/62950304/12548458
- allVariants: {
- color: primaryFontColor
}
}
});
diff --git a/packages/jupyter-ai/src/widgets/chat-error.tsx b/packages/jupyter-ai/src/widgets/chat-error.tsx
index 3b8f8ef95..8ae9cbb44 100644
--- a/packages/jupyter-ai/src/widgets/chat-error.tsx
+++ b/packages/jupyter-ai/src/widgets/chat-error.tsx
@@ -1,13 +1,16 @@
import React from 'react';
import { ReactWidget } from '@jupyterlab/apputils';
+import type { IThemeManager } from '@jupyterlab/apputils';
+import { Alert, Box } from '@mui/material';
import { chatIcon } from '../icons';
-import { Alert, Box } from '@mui/material';
import { JlThemeProvider } from '../components/jl-theme-provider';
-export function buildErrorWidget(): ReactWidget {
+export function buildErrorWidget(
+ themeManager: IThemeManager | null
+): ReactWidget {
const ErrorWidget = ReactWidget.create(
-      <JlThemeProvider>
+      <JlThemeProvider themeManager={themeManager}>
);
ChatWidget.id = 'jupyter-ai::chat';
diff --git a/yarn.lock b/yarn.lock
index ddc039416..916c31d41 100644
--- a/yarn.lock
+++ b/yarn.lock
@@ -2280,27 +2280,26 @@ __metadata:
languageName: node
linkType: hard
-"@jupyter/react-components@npm:^0.13.3":
- version: 0.13.3
- resolution: "@jupyter/react-components@npm:0.13.3"
+"@jupyter/react-components@npm:^0.15.2":
+ version: 0.15.2
+ resolution: "@jupyter/react-components@npm:0.15.2"
dependencies:
- "@jupyter/web-components": ^0.13.3
- "@microsoft/fast-react-wrapper": ^0.3.18
+ "@jupyter/web-components": ^0.15.2
+ "@microsoft/fast-react-wrapper": ^0.3.22
react: ">=17.0.0 <19.0.0"
- checksum: d8912ff6a68833d18bfe44489d71c9e6b4203a29c3c4f65379e630b2b1c1bd887360609d0ee2d03db2e84ee41570de1757cc09a1144288cd0e27a5e9bc0c6e82
+ checksum: d6d339ff9c2fed1fd5afda612be500d73c4a83eee5470d50e94020dadd1e389a3bf745c7240b0a48edbc6d3fdacec93367b7b5e40588f2df588419caada705be
languageName: node
linkType: hard
-"@jupyter/web-components@npm:^0.13.3":
- version: 0.13.3
- resolution: "@jupyter/web-components@npm:0.13.3"
+"@jupyter/web-components@npm:^0.15.2":
+ version: 0.15.2
+ resolution: "@jupyter/web-components@npm:0.15.2"
dependencies:
"@microsoft/fast-colors": ^5.3.1
- "@microsoft/fast-components": ^2.30.6
"@microsoft/fast-element": ^1.12.0
- "@microsoft/fast-foundation": ^2.49.0
- "@microsoft/fast-web-utilities": ^6.0.0
- checksum: 23a698f4a0cecc0536f8af54c57175fd276d731a8dd978fe52ada02a72679189096f4fff337279a38a75cfdd92c590f7295d3fd12b6e1c5e3241a4691137d214
+ "@microsoft/fast-foundation": ^2.49.4
+ "@microsoft/fast-web-utilities": ^5.4.1
+ checksum: f272ef91de08e28f9414a26dbd2388e1a8985c90f4ab00231978cee49bd5212f812411397a9038d298c8c0c4b41eb28cc86f1127bc7ace309bda8df60c4a87c8
languageName: node
linkType: hard
@@ -2404,19 +2403,19 @@ __metadata:
languageName: node
linkType: hard
-"@jupyterlab/apputils@npm:^4.2.0-beta.0, @jupyterlab/apputils@npm:^4.2.0-beta.1":
- version: 4.2.0-beta.1
- resolution: "@jupyterlab/apputils@npm:4.2.0-beta.1"
- dependencies:
- "@jupyterlab/coreutils": ^6.1.0-beta.1
- "@jupyterlab/observables": ^5.1.0-beta.1
- "@jupyterlab/rendermime-interfaces": ^3.9.0-beta.1
- "@jupyterlab/services": ^7.1.0-beta.1
- "@jupyterlab/settingregistry": ^4.1.0-beta.1
- "@jupyterlab/statedb": ^4.1.0-beta.1
- "@jupyterlab/statusbar": ^4.1.0-beta.1
- "@jupyterlab/translation": ^4.1.0-beta.1
- "@jupyterlab/ui-components": ^4.1.0-beta.1
+"@jupyterlab/apputils@npm:^4.2.0":
+ version: 4.2.0
+ resolution: "@jupyterlab/apputils@npm:4.2.0"
+ dependencies:
+ "@jupyterlab/coreutils": ^6.1.0
+ "@jupyterlab/observables": ^5.1.0
+ "@jupyterlab/rendermime-interfaces": ^3.9.0
+ "@jupyterlab/services": ^7.1.0
+ "@jupyterlab/settingregistry": ^4.1.0
+ "@jupyterlab/statedb": ^4.1.0
+ "@jupyterlab/statusbar": ^4.1.0
+ "@jupyterlab/translation": ^4.1.0
+ "@jupyterlab/ui-components": ^4.1.0
"@lumino/algorithm": ^2.0.1
"@lumino/commands": ^2.2.0
"@lumino/coreutils": ^2.1.2
@@ -2429,7 +2428,7 @@ __metadata:
"@types/react": ^18.0.26
react: ^18.2.0
sanitize-html: ~2.7.3
- checksum: 08e88b22bb4c9e5b333f32b44888ab0d7f6300bafb0b7966a40eb3f187f932ceece5a2cbf7c0ee29cbfeb9d90f954352973df96ecdddd4ad8ea89efaa67df46f
+ checksum: aec06e0e1403850676e766061d847e7cefa7225cdf48bbd2f3ab3f8356cb306646bf57dc15bcda149aa700e87850425ab8b79299d3414751a1753747ef9f15ba
languageName: node
linkType: hard
@@ -2547,19 +2546,19 @@ __metadata:
languageName: node
linkType: hard
-"@jupyterlab/codeeditor@npm:^4.1.0-beta.0, @jupyterlab/codeeditor@npm:^4.1.0-beta.1":
- version: 4.1.0-beta.1
- resolution: "@jupyterlab/codeeditor@npm:4.1.0-beta.1"
+"@jupyterlab/codeeditor@npm:^4.1.0":
+ version: 4.1.0
+ resolution: "@jupyterlab/codeeditor@npm:4.1.0"
dependencies:
"@codemirror/state": ^6.2.0
"@jupyter/ydoc": ^1.1.1
- "@jupyterlab/apputils": ^4.2.0-beta.1
- "@jupyterlab/coreutils": ^6.1.0-beta.1
- "@jupyterlab/nbformat": ^4.1.0-beta.1
- "@jupyterlab/observables": ^5.1.0-beta.1
- "@jupyterlab/statusbar": ^4.1.0-beta.1
- "@jupyterlab/translation": ^4.1.0-beta.1
- "@jupyterlab/ui-components": ^4.1.0-beta.1
+ "@jupyterlab/apputils": ^4.2.0
+ "@jupyterlab/coreutils": ^6.1.0
+ "@jupyterlab/nbformat": ^4.1.0
+ "@jupyterlab/observables": ^5.1.0
+ "@jupyterlab/statusbar": ^4.1.0
+ "@jupyterlab/translation": ^4.1.0
+ "@jupyterlab/ui-components": ^4.1.0
"@lumino/coreutils": ^2.1.2
"@lumino/disposable": ^2.1.2
"@lumino/dragdrop": ^2.1.4
@@ -2567,7 +2566,7 @@ __metadata:
"@lumino/signaling": ^2.1.2
"@lumino/widgets": ^2.3.1
react: ^18.2.0
- checksum: db80b904be6cf3bf38569dfe9b918978633b66ddc8df6ea48b090a6f56465b435b7750b3791c5791a85004f0eaa63a85e80320a3deb2813363d7bfed79ce2ea5
+ checksum: ae58f6cb446f98b781a956986fcb497b53f380ed86510d67b13e3086cee434423d5a03c26a130ea8d02c762cd6a6cbc62fd088c6f60f78d4bb558102e4c80ad8
languageName: node
linkType: hard
@@ -2613,9 +2612,9 @@ __metadata:
languageName: node
linkType: hard
-"@jupyterlab/codemirror@npm:^4.1.0-beta.0":
- version: 4.1.0-beta.1
- resolution: "@jupyterlab/codemirror@npm:4.1.0-beta.1"
+"@jupyterlab/codemirror@npm:^4.1.0":
+ version: 4.1.0
+ resolution: "@jupyterlab/codemirror@npm:4.1.0"
dependencies:
"@codemirror/autocomplete": ^6.5.1
"@codemirror/commands": ^6.2.3
@@ -2638,11 +2637,11 @@ __metadata:
"@codemirror/state": ^6.2.0
"@codemirror/view": ^6.9.6
"@jupyter/ydoc": ^1.1.1
- "@jupyterlab/codeeditor": ^4.1.0-beta.1
- "@jupyterlab/coreutils": ^6.1.0-beta.1
- "@jupyterlab/documentsearch": ^4.1.0-beta.1
- "@jupyterlab/nbformat": ^4.1.0-beta.1
- "@jupyterlab/translation": ^4.1.0-beta.1
+ "@jupyterlab/codeeditor": ^4.1.0
+ "@jupyterlab/coreutils": ^6.1.0
+ "@jupyterlab/documentsearch": ^4.1.0
+ "@jupyterlab/nbformat": ^4.1.0
+ "@jupyterlab/translation": ^4.1.0
"@lezer/common": ^1.0.2
"@lezer/generator": ^1.2.2
"@lezer/highlight": ^1.1.4
@@ -2651,27 +2650,27 @@ __metadata:
"@lumino/disposable": ^2.1.2
"@lumino/signaling": ^2.1.2
yjs: ^13.5.40
- checksum: c15e974550f2f15f6fc042977e31b98df2f292de751f45e54f026526e679144a20122a0ea7ff9780ee6cc5f10c9129c21f7b1ea5af398267a4cb042ae190b65b
+ checksum: 92fb4ebebe4b5926fbf5ba2a99f845e8879918b3a095adf99de5f8385b3168412db38ebe2f1ae1eff8f29304d2c8c1b31c3cc1ba66a9c2d16e7a69dced20a768
languageName: node
linkType: hard
-"@jupyterlab/completer@npm:4.1.0-beta.0":
- version: 4.1.0-beta.0
- resolution: "@jupyterlab/completer@npm:4.1.0-beta.0"
+"@jupyterlab/completer@npm:^4.1.0":
+ version: 4.1.0
+ resolution: "@jupyterlab/completer@npm:4.1.0"
dependencies:
"@codemirror/state": ^6.2.0
"@codemirror/view": ^6.9.6
"@jupyter/ydoc": ^1.1.1
- "@jupyterlab/apputils": ^4.2.0-beta.0
- "@jupyterlab/codeeditor": ^4.1.0-beta.0
- "@jupyterlab/codemirror": ^4.1.0-beta.0
- "@jupyterlab/coreutils": ^6.1.0-beta.0
- "@jupyterlab/rendermime": ^4.1.0-beta.0
- "@jupyterlab/services": ^7.1.0-beta.0
- "@jupyterlab/settingregistry": ^4.1.0-beta.0
- "@jupyterlab/statedb": ^4.1.0-beta.0
- "@jupyterlab/translation": ^4.1.0-beta.0
- "@jupyterlab/ui-components": ^4.1.0-beta.0
+ "@jupyterlab/apputils": ^4.2.0
+ "@jupyterlab/codeeditor": ^4.1.0
+ "@jupyterlab/codemirror": ^4.1.0
+ "@jupyterlab/coreutils": ^6.1.0
+ "@jupyterlab/rendermime": ^4.1.0
+ "@jupyterlab/services": ^7.1.0
+ "@jupyterlab/settingregistry": ^4.1.0
+ "@jupyterlab/statedb": ^4.1.0
+ "@jupyterlab/translation": ^4.1.0
+ "@jupyterlab/ui-components": ^4.1.0
"@lumino/algorithm": ^2.0.1
"@lumino/coreutils": ^2.1.2
"@lumino/disposable": ^2.1.2
@@ -2679,7 +2678,7 @@ __metadata:
"@lumino/messaging": ^2.0.1
"@lumino/signaling": ^2.1.2
"@lumino/widgets": ^2.3.1
- checksum: 542ba03197dc4abc4895cf096ac3eb572c7178ab5c787663e985b1515203a6eabf6a02ebc9eda4ea5b96380937c241ed2b35378340b4d596a74e7e34e5893fb9
+ checksum: 11c21f95722c2cce8ce91886036e381b6c43bd9b602bf37e38de2aabeab315cb6cc68bed9d12abfa75dc0cad616b4fd9748a77f81016cd739aa1ef8128964cbc
languageName: node
linkType: hard
@@ -2711,9 +2710,9 @@ __metadata:
languageName: node
linkType: hard
-"@jupyterlab/coreutils@npm:^6.1.0-beta.0, @jupyterlab/coreutils@npm:^6.1.0-beta.1":
- version: 6.1.0-beta.1
- resolution: "@jupyterlab/coreutils@npm:6.1.0-beta.1"
+"@jupyterlab/coreutils@npm:^6.1.0":
+ version: 6.1.0
+ resolution: "@jupyterlab/coreutils@npm:6.1.0"
dependencies:
"@lumino/coreutils": ^2.1.2
"@lumino/disposable": ^2.1.2
@@ -2721,7 +2720,7 @@ __metadata:
minimist: ~1.2.0
path-browserify: ^1.0.0
url-parse: ~1.5.4
- checksum: aeca458beb8f9f73d9ecafdbf85977c46ae472caa8d4f2914060b0a674f8b88f6af1feaee9d1228ec43138c61cf7c48bcadb8fb6f79e9797dc97a7395a579731
+ checksum: d1fdeb3fa28af76cab52c04c82b51a1f02f9cd7779dc1eecbd1177bf246d0213c4e7234bf74eb1bd1d909123988e40addbec8fd7a027c4f5448f3c968b27642c
languageName: node
linkType: hard
@@ -2791,13 +2790,13 @@ __metadata:
languageName: node
linkType: hard
-"@jupyterlab/documentsearch@npm:^4.1.0-beta.1":
- version: 4.1.0-beta.1
- resolution: "@jupyterlab/documentsearch@npm:4.1.0-beta.1"
+"@jupyterlab/documentsearch@npm:^4.1.0":
+ version: 4.1.0
+ resolution: "@jupyterlab/documentsearch@npm:4.1.0"
dependencies:
- "@jupyterlab/apputils": ^4.2.0-beta.1
- "@jupyterlab/translation": ^4.1.0-beta.1
- "@jupyterlab/ui-components": ^4.1.0-beta.1
+ "@jupyterlab/apputils": ^4.2.0
+ "@jupyterlab/translation": ^4.1.0
+ "@jupyterlab/ui-components": ^4.1.0
"@lumino/commands": ^2.2.0
"@lumino/coreutils": ^2.1.2
"@lumino/disposable": ^2.1.2
@@ -2806,7 +2805,7 @@ __metadata:
"@lumino/signaling": ^2.1.2
"@lumino/widgets": ^2.3.1
react: ^18.2.0
- checksum: c1071370e35014230d9da1379f112d8ce03d65736da2014d524230885a00d188533a2df19f43431e92f0dd5028a89b0f21acfd737214e70c33a4f9d2f2a1340e
+ checksum: 768b02f07c892622b126d8b8f59e4559003f3900f2cb588fba27aa87ebb1eb9a703fe99ebccc9bd8ccba2f8859ba157060b0bb5e5c5572fe9906fd7152caf536
languageName: node
linkType: hard
@@ -2903,12 +2902,12 @@ __metadata:
languageName: node
linkType: hard
-"@jupyterlab/nbformat@npm:^4.1.0-beta.1":
- version: 4.1.0-beta.1
- resolution: "@jupyterlab/nbformat@npm:4.1.0-beta.1"
+"@jupyterlab/nbformat@npm:^4.1.0":
+ version: 4.1.0
+ resolution: "@jupyterlab/nbformat@npm:4.1.0"
dependencies:
"@lumino/coreutils": ^2.1.2
- checksum: 5a48c52fb67657a18c78dcd2b934c273ded1e2bfec573a4a01d3ef4238beb808d4f509b96d3306c4a39df00f77da3bc74692c2ab8e41d83e60a1382a9e0cd978
+ checksum: 0f10f53d312e1ad386be0cd1db3ea8d76ac5e169a1c470465179b35c7d7bd0e55b9d450b64abe38f447dcbec71224bfe8d4115a1cdb433f986d3a91234ffd391
languageName: node
linkType: hard
@@ -2974,16 +2973,16 @@ __metadata:
languageName: node
linkType: hard
-"@jupyterlab/observables@npm:^5.1.0-beta.1":
- version: 5.1.0-beta.1
- resolution: "@jupyterlab/observables@npm:5.1.0-beta.1"
+"@jupyterlab/observables@npm:^5.1.0":
+ version: 5.1.0
+ resolution: "@jupyterlab/observables@npm:5.1.0"
dependencies:
"@lumino/algorithm": ^2.0.1
"@lumino/coreutils": ^2.1.2
"@lumino/disposable": ^2.1.2
"@lumino/messaging": ^2.0.1
"@lumino/signaling": ^2.1.2
- checksum: 4bdc64771692a9613351251113ca8cd28f69fac00957d500de4cbcb595999bf234c3a61d36ed074d390b7085cde5e2e4d4be59a63f55db271597b5f2f4c07675
+ checksum: 38ee528b244b06a2813874e11d2c3aa8b576f98ffdf9f77fc6c9ddf49de296b4067b4ad7f41f5eaab1de50d16fc79a31d26a34963e09c259e4332cf15c0c7bd5
languageName: node
linkType: hard
@@ -3029,13 +3028,13 @@ __metadata:
languageName: node
linkType: hard
-"@jupyterlab/rendermime-interfaces@npm:^3.9.0-beta.1":
- version: 3.9.0-beta.1
- resolution: "@jupyterlab/rendermime-interfaces@npm:3.9.0-beta.1"
+"@jupyterlab/rendermime-interfaces@npm:^3.9.0":
+ version: 3.9.0
+ resolution: "@jupyterlab/rendermime-interfaces@npm:3.9.0"
dependencies:
"@lumino/coreutils": ^1.11.0 || ^2.1.2
"@lumino/widgets": ^1.37.2 || ^2.3.1
- checksum: b8c6cd6af79bb80ace56da753cbfdeba0a7739ed90160fe67cf9f209ee3ee220a616a24422720e6702a2944d23e8193ff1ad6f1d881be0bf8e126e93480fd714
+ checksum: 462f5d034cd636caf9322245a50045ddaac55e05e056e7c6579e2db55088e724c8054a51a959aa284c44b108a9e0f0053707b50d6d8a9caed5825eeaf715b245
languageName: node
linkType: hard
@@ -3059,23 +3058,23 @@ __metadata:
languageName: node
linkType: hard
-"@jupyterlab/rendermime@npm:^4.1.0-beta.0":
- version: 4.1.0-beta.1
- resolution: "@jupyterlab/rendermime@npm:4.1.0-beta.1"
- dependencies:
- "@jupyterlab/apputils": ^4.2.0-beta.1
- "@jupyterlab/coreutils": ^6.1.0-beta.1
- "@jupyterlab/nbformat": ^4.1.0-beta.1
- "@jupyterlab/observables": ^5.1.0-beta.1
- "@jupyterlab/rendermime-interfaces": ^3.9.0-beta.1
- "@jupyterlab/services": ^7.1.0-beta.1
- "@jupyterlab/translation": ^4.1.0-beta.1
+"@jupyterlab/rendermime@npm:^4.1.0":
+ version: 4.1.0
+ resolution: "@jupyterlab/rendermime@npm:4.1.0"
+ dependencies:
+ "@jupyterlab/apputils": ^4.2.0
+ "@jupyterlab/coreutils": ^6.1.0
+ "@jupyterlab/nbformat": ^4.1.0
+ "@jupyterlab/observables": ^5.1.0
+ "@jupyterlab/rendermime-interfaces": ^3.9.0
+ "@jupyterlab/services": ^7.1.0
+ "@jupyterlab/translation": ^4.1.0
"@lumino/coreutils": ^2.1.2
"@lumino/messaging": ^2.0.1
"@lumino/signaling": ^2.1.2
"@lumino/widgets": ^2.3.1
lodash.escape: ^4.0.1
- checksum: 22f87e09f8c27d06c0f9bb72eb45284c9182411318bf976c4915aad68b6e89d3a4101580a37ee32473d59afdecc30354fb5a5baa2db622cd411241321fa69a8d
+ checksum: 52323a1d907b29f5b60c237b6e1c3085c667f9fd59e76c6dcab29076a50eb4bd39efe5f6e3e49e3dbabb6dc1f5f7820f09af74f211a76e7e7db6c7c0be8d5715
languageName: node
linkType: hard
@@ -3117,22 +3116,22 @@ __metadata:
languageName: node
linkType: hard
-"@jupyterlab/services@npm:^7.1.0-beta.0, @jupyterlab/services@npm:^7.1.0-beta.1":
- version: 7.1.0-beta.1
- resolution: "@jupyterlab/services@npm:7.1.0-beta.1"
+"@jupyterlab/services@npm:^7.1.0":
+ version: 7.1.0
+ resolution: "@jupyterlab/services@npm:7.1.0"
dependencies:
"@jupyter/ydoc": ^1.1.1
- "@jupyterlab/coreutils": ^6.1.0-beta.1
- "@jupyterlab/nbformat": ^4.1.0-beta.1
- "@jupyterlab/settingregistry": ^4.1.0-beta.1
- "@jupyterlab/statedb": ^4.1.0-beta.1
+ "@jupyterlab/coreutils": ^6.1.0
+ "@jupyterlab/nbformat": ^4.1.0
+ "@jupyterlab/settingregistry": ^4.1.0
+ "@jupyterlab/statedb": ^4.1.0
"@lumino/coreutils": ^2.1.2
"@lumino/disposable": ^2.1.2
"@lumino/polling": ^2.1.2
"@lumino/properties": ^2.0.1
"@lumino/signaling": ^2.1.2
ws: ^8.11.0
- checksum: 8c0728901e1e80c069aff11abe4c5716502bfb133cab5592a844d1dd6db528212344522b0a15b47aa4c2ade1da9a59d480563313b2a263f291dfb96e605ff08c
+ checksum: 4a4797746c708551a7647c43ecc4dce20dc12ea043bb2bd43ec0c20966825a5e14742258d3bcee9ae832c91030132db895dc9a81bf1596d59c08066c4fecfba5
languageName: node
linkType: hard
@@ -3174,12 +3173,12 @@ __metadata:
languageName: node
linkType: hard
-"@jupyterlab/settingregistry@npm:^4.1.0-beta.0, @jupyterlab/settingregistry@npm:^4.1.0-beta.1":
- version: 4.1.0-beta.1
- resolution: "@jupyterlab/settingregistry@npm:4.1.0-beta.1"
+"@jupyterlab/settingregistry@npm:^4.1.0":
+ version: 4.1.0
+ resolution: "@jupyterlab/settingregistry@npm:4.1.0"
dependencies:
- "@jupyterlab/nbformat": ^4.1.0-beta.1
- "@jupyterlab/statedb": ^4.1.0-beta.1
+ "@jupyterlab/nbformat": ^4.1.0
+ "@jupyterlab/statedb": ^4.1.0
"@lumino/commands": ^2.2.0
"@lumino/coreutils": ^2.1.2
"@lumino/disposable": ^2.1.2
@@ -3189,7 +3188,7 @@ __metadata:
json5: ^2.2.3
peerDependencies:
react: ">=16"
- checksum: c3ceb6cbf9bc061e9ad0f44d6fe06f59ed4e9f6223f7307c0c30112e20da7da4361928c0380dbdcf92fe0e533934d9c032881165d8546ce51707188696630dd3
+ checksum: 1a0c52016806ceda150168cdeae966b15afce454fe24acfd68939f3f380eaf2d4390c40e27c1475877c8e8da6b3f15a952999ebcc9d3838d5306bd24ad5b4b51
languageName: node
linkType: hard
@@ -3219,16 +3218,16 @@ __metadata:
languageName: node
linkType: hard
-"@jupyterlab/statedb@npm:^4.1.0-beta.0, @jupyterlab/statedb@npm:^4.1.0-beta.1":
- version: 4.1.0-beta.1
- resolution: "@jupyterlab/statedb@npm:4.1.0-beta.1"
+"@jupyterlab/statedb@npm:^4.1.0":
+ version: 4.1.0
+ resolution: "@jupyterlab/statedb@npm:4.1.0"
dependencies:
"@lumino/commands": ^2.2.0
"@lumino/coreutils": ^2.1.2
"@lumino/disposable": ^2.1.2
"@lumino/properties": ^2.0.1
"@lumino/signaling": ^2.1.2
- checksum: a4f24554c41db7c5b008d544086038a6c8d37d53cf3d6f8fa911ac28ec4380a67cbb2f2fbcdb48c0ba48adb63b11efda70bfcb90770ab24bfd80b2723a6c2c3e
+ checksum: 693d40ba6ce67b41aae2acbae027a5c637c2bfa51d7085b6faecdb1877a5e3bd43ca70f3670f88f038c49bef80e0e09899b05d330dd9010b1d578ca73b13ea17
languageName: node
linkType: hard
@@ -3264,11 +3263,11 @@ __metadata:
languageName: node
linkType: hard
-"@jupyterlab/statusbar@npm:^4.1.0-beta.1":
- version: 4.1.0-beta.1
- resolution: "@jupyterlab/statusbar@npm:4.1.0-beta.1"
+"@jupyterlab/statusbar@npm:^4.1.0":
+ version: 4.1.0
+ resolution: "@jupyterlab/statusbar@npm:4.1.0"
dependencies:
- "@jupyterlab/ui-components": ^4.1.0-beta.1
+ "@jupyterlab/ui-components": ^4.1.0
"@lumino/algorithm": ^2.0.1
"@lumino/coreutils": ^2.1.2
"@lumino/disposable": ^2.1.2
@@ -3276,7 +3275,7 @@ __metadata:
"@lumino/signaling": ^2.1.2
"@lumino/widgets": ^2.3.1
react: ^18.2.0
- checksum: c9b48d15e5c6bb0337d583cf0ab47393f7d7cd84dacb9797d9cbd7517bca877a333ba7a75e8d96d93e68d090c06d3d8f58589ba6c1dcb8c73233022c282c24dd
+ checksum: 309d3cb98c924c23dfef2ad91862dfa56ea133d8ae08aa7bc743c4000f15584841b39712bc8829eb09d7382d5c9e0e7b3e85c3ae1165c01597ade96702bcc055
languageName: node
linkType: hard
@@ -3366,16 +3365,16 @@ __metadata:
languageName: node
linkType: hard
-"@jupyterlab/translation@npm:^4.1.0-beta.0, @jupyterlab/translation@npm:^4.1.0-beta.1":
- version: 4.1.0-beta.1
- resolution: "@jupyterlab/translation@npm:4.1.0-beta.1"
+"@jupyterlab/translation@npm:^4.1.0":
+ version: 4.1.0
+ resolution: "@jupyterlab/translation@npm:4.1.0"
dependencies:
- "@jupyterlab/coreutils": ^6.1.0-beta.1
- "@jupyterlab/rendermime-interfaces": ^3.9.0-beta.1
- "@jupyterlab/services": ^7.1.0-beta.1
- "@jupyterlab/statedb": ^4.1.0-beta.1
+ "@jupyterlab/coreutils": ^6.1.0
+ "@jupyterlab/rendermime-interfaces": ^3.9.0
+ "@jupyterlab/services": ^7.1.0
+ "@jupyterlab/statedb": ^4.1.0
"@lumino/coreutils": ^2.1.2
- checksum: bc6b2d72f8124bf39865a037a462bbd8c394255dce6c8ce23b11f11d9a886019b4109cebc73969d7d70ac1651daeef58cee3ac3e982afd713c6987ddd92fee97
+ checksum: 88b7422697c1795dfcb85870cb8642cd10be6ae27a61dd1ca9f1304f06460f859202bfb6733cb744e2b4c448e8bfbf7a4793c6626cb4a18a59c80999cf1c5050
languageName: node
linkType: hard
@@ -3437,16 +3436,16 @@ __metadata:
languageName: node
linkType: hard
-"@jupyterlab/ui-components@npm:^4.1.0-beta.0, @jupyterlab/ui-components@npm:^4.1.0-beta.1":
- version: 4.1.0-beta.1
- resolution: "@jupyterlab/ui-components@npm:4.1.0-beta.1"
- dependencies:
- "@jupyter/react-components": ^0.13.3
- "@jupyter/web-components": ^0.13.3
- "@jupyterlab/coreutils": ^6.1.0-beta.1
- "@jupyterlab/observables": ^5.1.0-beta.1
- "@jupyterlab/rendermime-interfaces": ^3.9.0-beta.1
- "@jupyterlab/translation": ^4.1.0-beta.1
+"@jupyterlab/ui-components@npm:^4.1.0":
+ version: 4.1.0
+ resolution: "@jupyterlab/ui-components@npm:4.1.0"
+ dependencies:
+ "@jupyter/react-components": ^0.15.2
+ "@jupyter/web-components": ^0.15.2
+ "@jupyterlab/coreutils": ^6.1.0
+ "@jupyterlab/observables": ^5.1.0
+ "@jupyterlab/rendermime-interfaces": ^3.9.0
+ "@jupyterlab/translation": ^4.1.0
"@lumino/algorithm": ^2.0.1
"@lumino/commands": ^2.2.0
"@lumino/coreutils": ^2.1.2
@@ -3464,7 +3463,7 @@ __metadata:
typestyle: ^2.0.4
peerDependencies:
react: ^18.2.0
- checksum: b6fa63c3df4754083674ff957a89c9db16eee1b7e650657735d144b3218eb1a070b82f6584882e4e9fbeafd568a23390f08c2bdf68bfc5a8414d652b84bb04b8
+ checksum: 53f8eb432d7ff8890ec748c3b43fbcb67fe6cd218b771c4c334e1ddd80a13b570071f171eca4c15feebc4715427e422f833d7b8e2084bcd2605979a444e1536d
languageName: node
linkType: hard
@@ -3866,34 +3865,21 @@ __metadata:
languageName: node
linkType: hard
-"@microsoft/fast-colors@npm:^5.3.0, @microsoft/fast-colors@npm:^5.3.1":
+"@microsoft/fast-colors@npm:^5.3.1":
version: 5.3.1
resolution: "@microsoft/fast-colors@npm:5.3.1"
checksum: ff87f402faadb4b5aeee3d27762566c11807f927cd4012b8bbc7f073ca68de0e2197f95330ff5dfd7038f4b4f0e2f51b11feb64c5d570f5c598d37850a5daf60
languageName: node
linkType: hard
-"@microsoft/fast-components@npm:^2.30.6":
- version: 2.30.6
- resolution: "@microsoft/fast-components@npm:2.30.6"
- dependencies:
- "@microsoft/fast-colors": ^5.3.0
- "@microsoft/fast-element": ^1.10.1
- "@microsoft/fast-foundation": ^2.46.2
- "@microsoft/fast-web-utilities": ^5.4.1
- tslib: ^1.13.0
- checksum: 1fbf3b7c265bcbf6abcae4d2f72430f7f871104a3d8344f16667a4cc7b123698cdf2bab8b760cbed92ef761c4db350a67f570665c76b132d6996990ac93cbd4f
- languageName: node
- linkType: hard
-
-"@microsoft/fast-element@npm:^1.10.1, @microsoft/fast-element@npm:^1.12.0":
+"@microsoft/fast-element@npm:^1.12.0":
version: 1.12.0
resolution: "@microsoft/fast-element@npm:1.12.0"
checksum: bbff4e9c83106d1d74f3eeedc87bf84832429e78fee59c6a4ae8164ee4f42667503f586896bea72341b4d2c76c244a3cb0d4fd0d5d3732755f00357714dd609e
languageName: node
linkType: hard
-"@microsoft/fast-foundation@npm:^2.46.2, @microsoft/fast-foundation@npm:^2.49.0, @microsoft/fast-foundation@npm:^2.49.4":
+"@microsoft/fast-foundation@npm:^2.49.4":
version: 2.49.4
resolution: "@microsoft/fast-foundation@npm:2.49.4"
dependencies:
@@ -3905,15 +3891,27 @@ __metadata:
languageName: node
linkType: hard
-"@microsoft/fast-react-wrapper@npm:^0.3.18":
- version: 0.3.22
- resolution: "@microsoft/fast-react-wrapper@npm:0.3.22"
+"@microsoft/fast-foundation@npm:^2.49.5":
+ version: 2.49.5
+ resolution: "@microsoft/fast-foundation@npm:2.49.5"
dependencies:
"@microsoft/fast-element": ^1.12.0
- "@microsoft/fast-foundation": ^2.49.4
+ "@microsoft/fast-web-utilities": ^5.4.1
+ tabbable: ^5.2.0
+ tslib: ^1.13.0
+ checksum: 8a4729e8193ee93f780dc88fac26561b42f2636e3f0a8e89bb1dfe256f50a01a21ed1d8e4d31ce40678807dc833e25f31ba735cb5d3c247b65219aeb2560c82c
+ languageName: node
+ linkType: hard
+
+"@microsoft/fast-react-wrapper@npm:^0.3.22":
+ version: 0.3.23
+ resolution: "@microsoft/fast-react-wrapper@npm:0.3.23"
+ dependencies:
+ "@microsoft/fast-element": ^1.12.0
+ "@microsoft/fast-foundation": ^2.49.5
peerDependencies:
react: ">=16.9.0"
- checksum: 6c7c0992dbaf91b32bc53b9d7ac21c7c8a89e6f45cc1b015cea1d1f3e766184ac7cea159479e34ddd30c347291cd5939e8d55696712086187deae37687054328
+ checksum: 45885e1868916d2aa9059e99c341c97da434331d9340a57128d4218081df68b5e1107031c608db9a550d6d1c3b010d516ed4f8dc5a8a2470058da6750dcd204a
languageName: node
linkType: hard
@@ -3926,15 +3924,6 @@ __metadata:
languageName: node
linkType: hard
-"@microsoft/fast-web-utilities@npm:^6.0.0":
- version: 6.0.0
- resolution: "@microsoft/fast-web-utilities@npm:6.0.0"
- dependencies:
- exenv-es6: ^1.1.1
- checksum: b4b906dbbf626212446d5952c160b1f7e7ce72dd33087c7ed634cb2745c31767bab7d17fba0e9fc32e42984fc5bc0a9929b4f05cbbcbe52869abe3666b5bfa39
- languageName: node
- linkType: hard
-
"@mui/base@npm:5.0.0-beta.8":
version: 5.0.0-beta.8
resolution: "@mui/base@npm:5.0.0-beta.8"