Chat completion stream returns empty total usage #506
I can confirm that this issue still exists. @JosePina98 did you find any workaround for this?
For now I have created this TypeScript function to estimate the tokens used in each call. According to the tests I've done, it makes a rough estimate, but on the high end.

```typescript
const tokenEstimation = (prompt: string, output: string): {
  prompt_tokens: number,
  completion_tokens: number,
  total_tokens: number
} => {
  const tokenLength = 3.8;
  const promptTokens = Math.ceil(prompt.length / tokenLength);
  const completionTokens = Math.ceil(output.length / tokenLength);
  return {
    prompt_tokens: promptTokens,
    completion_tokens: completionTokens,
    total_tokens: promptTokens + completionTokens
  };
};
```
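For a concrete sense of what this heuristic produces, here is the same function applied to a short exchange (restated so the snippet is self-contained; the 3.8 characters-per-token ratio is the comment's rough heuristic, not real tokenizer output):

```typescript
// Heuristic token estimator from the comment above: assumes ~3.8
// characters per token, rounding up each side independently.
const tokenEstimation = (prompt: string, output: string) => {
  const tokenLength = 3.8;
  const promptTokens = Math.ceil(prompt.length / tokenLength);
  const completionTokens = Math.ceil(output.length / tokenLength);
  return {
    prompt_tokens: promptTokens,
    completion_tokens: completionTokens,
    total_tokens: promptTokens + completionTokens,
  };
};

const estimate = tokenEstimation(
  "Say something funny",                  // 19 chars -> ceil(19 / 3.8) = 5
  "Why did the chicken cross the road?",  // 35 chars -> ceil(35 / 3.8) = 10
);
console.log(estimate); // { prompt_tokens: 5, completion_tokens: 10, total_tokens: 15 }
```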
I do kinda the same, using the tokenizer package. Since the package doesn't support new models I hard-coded 'gpt-4' as the model; I assume that they are all using …

```typescript
// Calculate token sizes
const inputTokenSize = tokenizer.encodeChat(messages, "gpt-4").length;
const outputTokenSize = tokenizer.encodeChat([{ role: "assistant", content: output }], "gpt-4").length;
```
Sorry about this. It was arguably a mistake to include …
Hitting this issue as well. I see this issue's status is Closed, but I couldn't find any new release with the fix.
cc @athyuttamre |
Hi folks, we pre-emptively added the …
Glad to hear it! Would it be possible to re-open the ticket to make it clearer when this has been resolved?
Hi team, any ETA on when this ticket will be resolved?
Following this for updates. P.S. thanks for the …
When is this issue going to be scheduled for resolution?
I am really looking forward to usage support in stream mode.
The Python one has been shipped. Is there an estimate for the Node library?
I'm using version …

```typescript
const stream = this.openAi.beta.chat.completions.stream({
  model: 'gpt-3.5-turbo',
  messages: [{ role: 'user', content: 'Say something funny' }],
  stream: true,
  max_tokens: 5,
  stream_options: {
    include_usage: true
  }
});
```

When using it like this, you are able to either use the … and receive:

```
{ completion_tokens: 5, prompt_tokens: 10, total_tokens: 15 }
```

Not sure if this feature was added silently, or nobody noticed?
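The behavior described above can be sketched without hitting the API: with `include_usage: true`, the streamed chunks carry `usage: null` until a final chunk whose `usage` field holds the totals. This is a minimal sketch using hand-mocked chunk objects, not the SDK's real stream types:

```typescript
// Mocked chunk shape, loosely modeled on the API's chat completion chunks.
type Usage = { prompt_tokens: number; completion_tokens: number; total_tokens: number };
type Chunk = { choices: { delta: { content?: string } }[]; usage: Usage | null };

const chunks: Chunk[] = [
  { choices: [{ delta: { content: "Hello" } }], usage: null },
  { choices: [{ delta: { content: " world" } }], usage: null },
  // Final chunk: no content delta, but usage totals are populated.
  { choices: [], usage: { prompt_tokens: 10, completion_tokens: 5, total_tokens: 15 } },
];

let text = "";
let usage: Usage | null = null;
for (const chunk of chunks) {
  text += chunk.choices[0]?.delta.content ?? "";
  if (chunk.usage) usage = chunk.usage; // only the final chunk carries it
}

console.log(text);  // "Hello world"
console.log(usage); // { prompt_tokens: 10, completion_tokens: 5, total_tokens: 15 }
```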
Thank you for the information: https://platform.openai.com/docs/changelog/may-6th-2024. I have now added a usage-fee display function to my app. It works perfectly. @JosePina98
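A usage-fee display like the one mentioned can be sketched by multiplying the returned usage counts by per-million-token prices. The prices and the `estimateCostUSD` helper below are hypothetical placeholders, not official rates or SDK functionality:

```typescript
// Sketch: turn a usage object into a dollar estimate.
// The per-million-token prices are placeholders; look up the current
// rates for your model before using anything like this.
type Usage = { prompt_tokens: number; completion_tokens: number; total_tokens: number };

const estimateCostUSD = (
  usage: Usage,
  promptPricePerMillion: number,
  completionPricePerMillion: number,
): number =>
  (usage.prompt_tokens / 1_000_000) * promptPricePerMillion +
  (usage.completion_tokens / 1_000_000) * completionPricePerMillion;

const usage: Usage = { prompt_tokens: 10, completion_tokens: 5, total_tokens: 15 };
// With placeholder prices of $0.50 (input) and $1.50 (output) per million tokens:
console.log(estimateCostUSD(usage, 0.5, 1.5));
```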
yes

要买直升机的Mickey

> ------------------ Original message ------------------
> From: "openai/openai-node"
> Date: Monday, July 22, 2024, 1:52 PM
> Subject: Re: [openai/openai-node] Chat completion stream returns empty total usage (Issue #506)
>
> It seems the usage option on streaming was supported on May 6th.
> Here is a release note:
> https://platform.openai.com/docs/changelog/may-6th-2024
> Can we close this issue?
Confirm this is a Node library issue and not an underlying OpenAI API issue
Describe the bug
Using `openai.beta.chat.completions.stream()` and then calling the `totalUsage` function returns an object with all values set to zero.
To Reproduce
Code snippets
OS
Ubuntu
Node version
Node v16.13.0
Library version
openai v4.19.0