Bump up transformers version & Remove MistralConfig #1254
Conversation
The PR LGTM! However, I think we need to make sure that the tokenizer performance is good before we merge this PR.
Hey, I built this from source and there is an issue with the OpenAI server. This worked for me after updating requirements.txt:
curl -X 'POST' \
'http://0.0.0.0:8000/v1/chat/completions' \
-H 'accept: application/json' \
-H 'Content-Type: application/json' \
-d '{
"model": "mistralai/Mistral-7B-Instruct-v0.1",
"messages": [{"role": "user", "content": "what is the capital of Germany?"}],
"temperature": 0.7,
"top_p": 1,
"n": 1,
"max_tokens": 20
}'
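For reference, the curl request above can be sketched in Python using only the standard library (a hedged equivalent, assuming a vLLM OpenAI-compatible server is listening on 0.0.0.0:8000; the guard just reports when no server is running):

```python
import json
import urllib.request

# Same request body as the curl command above.
payload = {
    "model": "mistralai/Mistral-7B-Instruct-v0.1",
    "messages": [{"role": "user", "content": "what is the capital of Germany?"}],
    "temperature": 0.7,
    "top_p": 1,
    "n": 1,
    "max_tokens": 20,
}

req = urllib.request.Request(
    "http://0.0.0.0:8000/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"accept": "application/json", "Content-Type": "application/json"},
)

try:
    # Only succeeds if the vLLM OpenAI-compatible server is up.
    with urllib.request.urlopen(req, timeout=10) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
except OSError as exc:
    print(f"server not reachable: {exc}")
```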
This can be fixed via
@zhuohan123 I think the performance issue is orthogonal to the PR, since new users will install the newest version of transformers and experience the performance issue anyway.
Now that MistralConfig is officially supported by the stable release of HF transformers, we can remove our MistralConfig.