
[Bug] ValueError: Tokenizer class Qwen2Tokenizer does not exist or is not currently imported. #1903

Closed

zhyncs opened this issue Jul 3, 2024 · 6 comments

zhyncs (Collaborator) commented Jul 3, 2024

Checklist

1. I have searched related issues but cannot get the expected help.
2. The bug has not been fixed in the latest version.

Describe the bug

python3 -m lmdeploy serve api_server Qwen2-72B-Instruct --tp 8

ref QwenLM/Qwen2#34 (comment)

Maybe we should require transformers>=4.37.0. Do you have any suggestions? Thanks. @grimoire @RunningLeon @lvhan028
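For illustration, a minimal sketch of what such a guard could look like at runtime (the constant and error message below are made up for this example, not actual lmdeploy code):

```python
# Hypothetical guard: 4.37.0 is the first transformers release with Qwen2 support.
from packaging import version

import transformers

MIN_TRANSFORMERS = '4.37.0'

if version.parse(transformers.__version__) < version.parse(MIN_TRANSFORMERS):
    raise RuntimeError(
        f'transformers>={MIN_TRANSFORMERS} is required for Qwen2 models, '
        f'but {transformers.__version__} is installed. Please upgrade with '
        f'`pip install -U "transformers>={MIN_TRANSFORMERS}"`.')
```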


Reproduction

Install transformers<4.37.0, then run the serve command above.
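The failure can also be reproduced directly through the transformers API (model path shown for illustration):

```python
# With transformers<4.37.0 installed, AutoTokenizer cannot resolve the
# "Qwen2Tokenizer" class named in the model's tokenizer_config.json.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('Qwen/Qwen2-72B-Instruct')
# ValueError: Tokenizer class Qwen2Tokenizer does not exist or is not
# currently imported.
```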

Environment

as mentioned above

Error traceback

No response

grimoire (Collaborator) commented Jul 3, 2024

The PyTorch engine performs the version check. I guess we should add checks in our tokenizer wrapper too.
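Roughly something like this (a sketch only; the class and message below are invented for illustration, not lmdeploy's actual wrapper):

```python
from transformers import AutoTokenizer


class Tokenizer:
    """Illustrative wrapper that surfaces a clearer error when the
    installed transformers is too old for the model's tokenizer class."""

    def __init__(self, model_path: str, trust_remote_code: bool = True):
        try:
            self.model = AutoTokenizer.from_pretrained(
                model_path, trust_remote_code=trust_remote_code)
        except ValueError as e:
            raise RuntimeError(
                f'Failed to load the tokenizer from {model_path}. The model '
                'may require a newer transformers release; please try '
                'upgrading transformers.') from e
```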

zhyncs (Collaborator, Author) commented Jul 3, 2024

> The PyTorch engine performs the version check. I guess we should add checks in our tokenizer wrapper too.

ok!

zhyncs (Collaborator, Author) commented Jul 3, 2024

Hi @grimoire. Nowadays, when an LLM is released, a PR is usually first submitted to transformers for support, and then the weights are released on Hugging Face. This means it is actually possible to know from which transformers version a model is supported. Should we consider maintaining something like a config.json, or should we just handle issues as they arise? For example, in the case of Qwen2, we could check the tokenizer and require transformers to be at least a specific version.
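Something like this, hypothetically (the table and helper below are invented to illustrate the idea, not existing lmdeploy code):

```python
from packaging import version

import transformers

# First transformers release known to support each model family.
MIN_TRANSFORMERS_BY_MODEL = {
    'qwen2': '4.37.0',
    # extend as new model families are released
}


def check_transformers_version(model_type: str):
    """Raise early if the installed transformers predates model support."""
    required = MIN_TRANSFORMERS_BY_MODEL.get(model_type)
    if required is None:
        return  # unknown family: fall back to transformers' own checks
    if version.parse(transformers.__version__) < version.parse(required):
        raise RuntimeError(
            f'Model type {model_type!r} requires transformers>={required}, '
            f'but {transformers.__version__} is installed.')
```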

grimoire (Collaborator) commented Jul 3, 2024

In my env check, I just try to load the config, and transformers does all the other checks for me, including the version check.

```python
try:
    from transformers import AutoConfig
    config = AutoConfig.from_pretrained(
        model_path, trust_remote_code=trust_remote_code)
except Exception as e:
    message = (
        f'Load model config with transformers=={trans_version}'
        ' failed. '
        'Please make sure model can be loaded with transformers API.')
    _handle_exception(e, 'transformers', logger, message=message)
```

That should be enough, unless the model fills in a wrong version in its config.
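For reference, this is roughly what the raw config failure looks like on an old transformers (error text approximate; exact wording varies across transformers versions):

```python
from transformers import AutoConfig

AutoConfig.from_pretrained('Qwen/Qwen2-72B-Instruct')
# ValueError: The checkpoint you are trying to load has model type `qwen2`
# but Transformers does not recognize this architecture. This could be
# because of an issue with the checkpoint, or because your version of
# Transformers is out of date.
```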

zhyncs (Collaborator, Author) commented Jul 3, 2024

Makes sense.

zhyncs (Collaborator, Author) commented Jul 3, 2024

I'll land the version check code for TurboMind. Stay tuned.
