openai.error.InvalidRequestError, requested too many tokens #14
Sorry about that; it seems your prompt got very long. Are you using the latest version of terminal-copilot, 1.0.7? Could you run with the -v flag to see the prompt that gets sent to OpenAI, to help us debug?
I ran into this problem when using copilot in a directory that contained a lot of hidden files. Once I deleted the files, the problem went away.
Yes
Implemented in #44
It worked great the other day; now I'm getting this, no matter the command:
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens, however you requested 6365 tokens (6109 in your prompt; 256 for the completion). Please reduce your prompt; or completion length.
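The arithmetic in the error message is the key: prompt tokens (6109) plus the reserved completion tokens (256) must fit within the model's 4097-token context window. A minimal sketch of a client-side guard, assuming a rough heuristic of ~4 characters per token rather than the model's real tokenizer (a proper fix would count tokens with a library such as tiktoken); the constants and function names here are illustrative, not part of terminal-copilot:

```python
# Rough token budgeting before calling the OpenAI API.
# Assumption: ~4 characters per English-text token. This is a heuristic,
# not the model's actual tokenizer, so it should be treated as approximate.
MAX_CONTEXT = 4097       # model's context window, from the error message
COMPLETION_TOKENS = 256  # tokens reserved for the completion, from the error message

def estimate_tokens(text: str) -> int:
    """Very rough estimate: about 4 characters per token."""
    return max(1, len(text) // 4)

def truncate_prompt(prompt: str) -> str:
    """Trim the prompt until estimated prompt + completion tokens fit the window."""
    budget = MAX_CONTEXT - COMPLETION_TOKENS
    while estimate_tokens(prompt) > budget:
        # Drop the oldest quarter of the prompt (e.g. a long directory
        # listing inflated by many hidden files, as reported above).
        prompt = prompt[len(prompt) // 4:]
    return prompt

long_prompt = "x" * 30000  # ~7500 estimated tokens, well over the budget
safe = truncate_prompt(long_prompt)
print(estimate_tokens(safe) <= MAX_CONTEXT - COMPLETION_TOKENS)  # True
```

Truncating from the front keeps the most recent context (the user's actual request) intact, which is usually the part the completion depends on.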