[Bug] ValueError: Tokenizer class Qwen2Tokenizer does not exist or is not currently imported. #1903
Comments
The PyTorch engine will perform the version check. Guess we should add checks in our tokenizer wrapper too.
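A minimal sketch of what such a check in the tokenizer wrapper could look like, assuming a 4.37.0 floor (the transformers release that added `Qwen2Tokenizer`); the function names here are illustrative, not lmdeploy's actual API:

```python
# Hypothetical version guard for a tokenizer wrapper (names illustrative).
from packaging import version

MIN_TRANSFORMERS = "4.37.0"  # Qwen2Tokenizer was added in transformers 4.37.0


def transformers_is_new_enough(installed, minimum=MIN_TRANSFORMERS):
    """Return True when the installed transformers version meets the floor."""
    return version.parse(installed) >= version.parse(minimum)


def load_tokenizer_checked(model_path, installed):
    """Fail early with an actionable message instead of the opaque
    'Tokenizer class Qwen2Tokenizer does not exist' ValueError."""
    if not transformers_is_new_enough(installed):
        raise RuntimeError(
            f"transformers>={MIN_TRANSFORMERS} is required to load "
            f"{model_path!r}, but transformers=={installed} is installed."
        )
    # ... delegate to the real tokenizer loading here ...
```

The point is only to surface a clear upgrade hint before transformers raises its generic `ValueError`.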
OK!
Hi @grimoire Usually, when an LLM is released these days, support is generally first submitted as a PR to transformers, and then the weights are released on Hugging Face. This means it is actually possible to know from which version of transformers a model is supported.
In my env-check, I try to load the config, and transformers does all the other checks for me, including the version check.
lmdeploy/lmdeploy/pytorch/check_env/__init__.py
Lines 118 to 127 in c9c225f
That should be enough, unless the model records a wrong version in its config.
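A hedged sketch of the config-based idea: compare the `transformers_version` field that transformers records in a model's `config.json` against the locally installed version. The helper name is hypothetical, and it illustrates the caveat above — the check trusts whatever version the config declares.

```python
# Hypothetical config-vs-installed version check (name illustrative).
from packaging import version


def config_version_ok(config_dict, installed):
    """True when the installed transformers is at least as new as the
    version recorded in the model's config.json. Trusts the config, so
    a wrongly filled-in version would slip through."""
    recorded = config_dict.get("transformers_version")
    if recorded is None:
        return True  # nothing recorded, nothing to check
    return version.parse(installed) >= version.parse(recorded)
```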
Makes sense.
I'll land the version check code for TurboMind. Stay tuned. |
Checklist
Describe the bug
ref QwenLM/Qwen2#34 (comment)
Maybe we should require `transformers>=4.37.0`. Do you have any suggestions? Thanks. @grimoire @RunningLeon @lvhan028
lmdeploy/requirements/runtime.txt
Line 18 in c9c225f
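Besides pinning the requirement, the floor could also be enforced at import time so that users with an older transformers get an actionable message instead of the `Qwen2Tokenizer` ValueError. A sketch using stdlib `importlib.metadata`; both helper names are hypothetical:

```python
# Hypothetical runtime enforcement of a minimum dependency version.
from importlib.metadata import PackageNotFoundError
from importlib.metadata import version as installed_version

from packaging import version


def needs_upgrade(installed, minimum):
    """True when `installed` (None for a missing package) fails the floor."""
    if installed is None:
        return True
    return version.parse(installed) < version.parse(minimum)


def assert_min_version(package, minimum):
    """Raise with an upgrade hint if `package` is older than `minimum`."""
    try:
        current = installed_version(package)
    except PackageNotFoundError:
        current = None
    if needs_upgrade(current, minimum):
        raise RuntimeError(
            f"{package}>={minimum} is required (found {current}); "
            f'run: pip install -U "{package}>={minimum}"'
        )
```

For example, `assert_min_version("transformers", "4.37.0")` near startup would fail fast on the environments reported in this issue.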
Reproduction
transformers < 4.37.0
Environment
Error traceback
No response