
%%ai chatgpt seems not using the api base url? #574

Open
butterl opened this issue Jan 11, 2024 · 4 comments
Labels: bug (Something isn't working), @jupyter-ai/magics

Comments

butterl commented Jan 11, 2024

Description

%%ai chatgpt does not seem to be using the API base URL.

Reproduce

Set the base URL to an OpenAI-API-compatible endpoint, e.g. https://api.xxx.com/v1

Expected behavior

It always returns an auth error, but the same URL + key works with other tools, so the base URL setting does not seem to be taking effect. The error returned is OpenAI's own error info, not error info from xxx.com:

AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-tfr6g***************************************96cA. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
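
For comparison, the same key and base URL can be exercised directly with the openai v1 client (a minimal sketch; the endpoint, key, and model name below are placeholders):

import openai

# Point the v1 client at the OpenAI-compatible endpoint directly
client = openai.OpenAI(
    api_key="sk-xxxx",                  # same key that fails via %%ai
    base_url="https://api.xxx.com/v1",  # same base URL set in the plugin
)
print(client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hello"}],
))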

Context

---------------------------------------------------------------------------
AuthenticationError                       Traceback (most recent call last)
Cell In[2], line 1
----> 1 get_ipython().run_cell_magic('ai', 'chatgpt', 'hello\n')

File ~/.local/lib/python3.10/site-packages/IPython/core/interactiveshell.py:2430, in InteractiveShell.run_cell_magic(self, magic_name, line, cell)
   2428 with self.builtin_trap:
   2429     args = (magic_arg_s, cell)
-> 2430     result = fn(*args, **kwargs)
   2432 # The code below prevents the output from being displayed
   2433 # when using magics with decodator @output_can_be_silenced
   2434 # when the last Python token in the expression is a ';'.
   2435 if getattr(fn, magic.MAGIC_OUTPUT_CAN_BE_SILENCED, False):

File ~/.local/lib/python3.10/site-packages/jupyter_ai_magics/magics.py:607, in AiMagics.ai(self, line, cell)
    604 ip = get_ipython()
    605 prompt = prompt.format_map(FormatDict(ip.user_ns))
--> 607 return self.run_ai_cell(args, prompt)

File ~/.local/lib/python3.10/site-packages/jupyter_ai_magics/magics.py:550, in AiMagics.run_ai_cell(self, args, prompt)
    547 prompt = prompt.format_map(FormatDict(ip.user_ns))
    549 if provider.is_chat_provider:
--> 550     result = provider.generate([[HumanMessage(content=prompt)]])
    551 else:
    552     # generate output from model via provider
    553     result = provider.generate([prompt])

File ~/.local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:382, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
    380         if run_managers:
    381             run_managers[i].on_llm_error(e, response=LLMResult(generations=[]))
--> 382         raise e
    383 flattened_outputs = [
    384     LLMResult(generations=[res.generations], llm_output=res.llm_output)
    385     for res in results
    386 ]
    387 llm_output = self._combine_llm_outputs([res.llm_output for res in results])

File ~/.local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:372, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
    369 for i, m in enumerate(messages):
    370     try:
    371         results.append(
--> 372             self._generate_with_cache(
    373                 m,
    374                 stop=stop,
    375                 run_manager=run_managers[i] if run_managers else None,
    376                 **kwargs,
    377             )
    378         )
    379     except BaseException as e:
    380         if run_managers:

File ~/.local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:528, in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
    524     raise ValueError(
    525         "Asked to cache, but no cache found at `langchain.cache`."
    526     )
    527 if new_arg_supported:
--> 528     return self._generate(
    529         messages, stop=stop, run_manager=run_manager, **kwargs
    530     )
    531 else:
    532     return self._generate(messages, stop=stop, **kwargs)

File ~/.local/lib/python3.10/site-packages/langchain_community/chat_models/openai.py:435, in ChatOpenAI._generate(self, messages, stop, run_manager, stream, **kwargs)
    429 message_dicts, params = self._create_message_dicts(messages, stop)
    430 params = {
    431     **params,
    432     **({"stream": stream} if stream is not None else {}),
    433     **kwargs,
    434 }
--> 435 response = self.completion_with_retry(
    436     messages=message_dicts, run_manager=run_manager, **params
    437 )
    438 return self._create_chat_result(response)

File ~/.local/lib/python3.10/site-packages/langchain_community/chat_models/openai.py:352, in ChatOpenAI.completion_with_retry(self, run_manager, **kwargs)
    350 """Use tenacity to retry the completion call."""
    351 if is_openai_v1():
--> 352     return self.client.create(**kwargs)
    354 retry_decorator = _create_retry_decorator(self, run_manager=run_manager)
    356 @retry_decorator
    357 def _completion_with_retry(**kwargs: Any) -> Any:

File ~/.local/lib/python3.10/site-packages/openai/_utils/_utils.py:270, in required_args.<locals>.inner.<locals>.wrapper(*args, **kwargs)
    268             msg = f"Missing required argument: {quote(missing[0])}"
    269     raise TypeError(msg)
--> 270 return func(*args, **kwargs)

File ~/.local/lib/python3.10/site-packages/openai/resources/chat/completions.py:645, in Completions.create(self, messages, model, frequency_penalty, function_call, functions, logit_bias, logprobs, max_tokens, n, presence_penalty, response_format, seed, stop, stream, temperature, tool_choice, tools, top_logprobs, top_p, user, extra_headers, extra_query, extra_body, timeout)
    596 @required_args(["messages", "model"], ["messages", "model", "stream"])
    597 def create(
    598     self,
   (...)
    643     timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
    644 ) -> ChatCompletion | Stream[ChatCompletionChunk]:
--> 645     return self._post(
    646         "/chat/completions",
    647         body=maybe_transform(
    648             {
    649                 "messages": messages,
    650                 "model": model,
    651                 "frequency_penalty": frequency_penalty,
    652                 "function_call": function_call,
    653                 "functions": functions,
    654                 "logit_bias": logit_bias,
    655                 "logprobs": logprobs,
    656                 "max_tokens": max_tokens,
    657                 "n": n,
    658                 "presence_penalty": presence_penalty,
    659                 "response_format": response_format,
    660                 "seed": seed,
    661                 "stop": stop,
    662                 "stream": stream,
    663                 "temperature": temperature,
    664                 "tool_choice": tool_choice,
    665                 "tools": tools,
    666                 "top_logprobs": top_logprobs,
    667                 "top_p": top_p,
    668                 "user": user,
    669             },
    670             completion_create_params.CompletionCreateParams,
    671         ),
    672         options=make_request_options(
    673             extra_headers=extra_headers, extra_query=extra_query, extra_body=extra_body, timeout=timeout
    674         ),
    675         cast_to=ChatCompletion,
    676         stream=stream or False,
    677         stream_cls=Stream[ChatCompletionChunk],
    678     )

File ~/.local/lib/python3.10/site-packages/openai/_base_client.py:1088, in SyncAPIClient.post(self, path, cast_to, body, options, files, stream, stream_cls)
   1074 def post(
   1075     self,
   1076     path: str,
   (...)
   1083     stream_cls: type[_StreamT] | None = None,
   1084 ) -> ResponseT | _StreamT:
   1085     opts = FinalRequestOptions.construct(
   1086         method="post", url=path, json_data=body, files=to_httpx_files(files), **options
   1087     )
-> 1088     return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))

File ~/.local/lib/python3.10/site-packages/openai/_base_client.py:853, in SyncAPIClient.request(self, cast_to, options, remaining_retries, stream, stream_cls)
    844 def request(
    845     self,
    846     cast_to: Type[ResponseT],
   (...)
    851     stream_cls: type[_StreamT] | None = None,
    852 ) -> ResponseT | _StreamT:
--> 853     return self._request(
    854         cast_to=cast_to,
    855         options=options,
    856         stream=stream,
    857         stream_cls=stream_cls,
    858         remaining_retries=remaining_retries,
    859     )

File ~/.local/lib/python3.10/site-packages/openai/_base_client.py:930, in SyncAPIClient._request(self, cast_to, options, remaining_retries, stream, stream_cls)
    927     if not err.response.is_closed:
    928         err.response.read()
--> 930     raise self._make_status_error_from_response(err.response) from None
    932 return self._process_response(
    933     cast_to=cast_to,
    934     options=options,
   (...)
    937     stream_cls=stream_cls,
    938 )

AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-tfr6g***************************************96cA. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
butterl added the bug label on Jan 11, 2024

welcome bot commented Jan 11, 2024

Thank you for opening your first issue in this project! Engagement like this is essential for open source projects! 🤗

If you haven't done so already, check out Jupyter's Code of Conduct. Also, please try to follow the issue template as it helps other community members to contribute more effectively.
You can meet the other Jovyans by joining our Discourse forum. There is also an intro thread there where you can stop by and say Hi! 👋

Welcome to the Jupyter community! 🎉

JasonWeill (Collaborator) commented

@butterl Thanks for opening this issue! Which versions of JupyterLab and Jupyter AI are you using? If you're using version 1.9.0 or 2.9.0 of Jupyter AI, I'm wondering if this is related to the openai version bump (#551) and possibly also to another issue with using OpenAI APIs by proxy (#464).
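
For reference, a quick way to report the relevant versions from within the same environment (importlib.metadata is in the standard library; the distribution names below are assumptions and may need adjusting):

import importlib.metadata as md

# Print the installed versions of the packages involved in this issue
for dist in ("jupyterlab", "jupyter-ai", "jupyter-ai-magics", "openai", "langchain"):
    try:
        print(dist, md.version(dist))
    except md.PackageNotFoundError:
        print(dist, "not installed")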

butterl (Author) commented Jan 12, 2024

JupyterLab version: 4.0.10

I have experimented and found a clue: the settings in the plugin do not seem to take effect for everything.

I used these settings in my .bashrc to work around the issue:
export OPENAI_API_KEY="sk-xxxx"
export OPENAI_BASE_URL="https://api.xxxx.com/v1"

These settings also work when applied after %reload_ext jupyter_ai_magics:
%env OPENAI_API_KEY=sk-xxxx
%env OPENAI_BASE_URL=https://api.xxxx.com/v1
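
This matches how the openai v1 client resolves its configuration: when no explicit values are passed, it falls back to the OPENAI_API_KEY and OPENAI_BASE_URL environment variables. A quick sanity check (the key and URL are placeholders):

import os
import openai

os.environ["OPENAI_API_KEY"] = "sk-xxxx"
os.environ["OPENAI_BASE_URL"] = "https://api.xxxx.com/v1"

# With no explicit arguments, the v1 client reads the env vars above
client = openai.OpenAI()
print(client.base_url)  # expected: https://api.xxxx.com/v1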

but the " Language model" part seems work, not sure why this happen

I use the plugin behind a proxy. The %%ai magic works after setting the global env vars, but the chat window returns an SSL verification failure with a self-signed cert. Does the networking path differ between the %%ai magic and the chat window?
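
As a diagnostic for the self-signed cert (a sketch for testing outside jupyter-ai, not a fix inside it): the openai v1 client accepts a custom httpx client, so you can check whether trusting the proxy's CA bundle resolves the SSL error. The CA path, key, and URL below are placeholders:

import httpx
import openai

# Trust the proxy's self-signed CA for this client only
http_client = httpx.Client(verify="/path/to/proxy-ca.pem")

client = openai.OpenAI(
    api_key="sk-xxxx",
    base_url="https://api.xxxx.com/v1",
    http_client=http_client,
)
print(client.models.list())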

JasonWeill (Collaborator) commented

Relating to #293, which concerns sharing API key credentials across the magic commands and chat UI. It seems like we should also be sharing the proxy settings for providers that support them.
