After version 4.41.2 (https://pypi.org/project/transformers/4.41.2/#history), Transformers added the `cache_position` parameter to Qwen2: https://github.com/huggingface/transformers/blob/e65502951593a76844e872fee9c56b805598538a/src/transformers/models/qwen2/modeling_qwen2.py#L728-L771

The PyTorch engine in LMDeploy has not yet been adapted to this change, so calls into the patched model fail with:

```
TypeError: PatchedQwen2Attention.forward() got an unexpected keyword argument 'cache_position'
```
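One way for the patched module to tolerate such interface additions is to accept the new keyword explicitly (and swallow unknown ones). A minimal sketch, mirroring the class name from the traceback but not LMDeploy's actual implementation:

```python
class PatchedQwen2Attention:
    """Hypothetical sketch: a patched attention wrapper whose forward()
    tolerates keyword arguments added by newer transformers releases
    (e.g. cache_position, introduced after v4.41.2)."""

    def forward(self, hidden_states, attention_mask=None, position_ids=None,
                past_key_value=None, cache_position=None, **kwargs):
        # The real attention computation is elided; only the signature
        # matters here. cache_position is accepted (and ignored) so the
        # call from newer transformers modeling code no longer raises
        # TypeError; **kwargs absorbs future interface additions.
        return hidden_states
```

The trailing `**kwargs` is a trade-off: it keeps the engine working across transformers releases, but silently ignoring an argument like `cache_position` is only safe if the engine manages the KV cache itself.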
LMDeploy currently does not restrict the `transformers` version (see lmdeploy/requirements/runtime.txt, line 18 at e820b56), so any new transformers release can reintroduce this kind of breakage.
There are two ways to fix this issue: either pin `transformers<=4.41.2` in the requirements, or adapt the PyTorch engine to accept the `cache_position` parameter.

Do you have any suggestions? This issue is serious and needs to be fixed before the v0.5.0 release.
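If the pinning route is taken, a runtime guard could also surface a clear error instead of the opaque TypeError. A hypothetical sketch (`is_supported` and `check_transformers_version` are invented helpers, not part of LMDeploy):

```python
import importlib.metadata


def is_supported(version_str, max_supported=(4, 41, 2)):
    """Return True if a transformers version string is at or below the
    last release the engine has been adapted to (hypothetical helper)."""
    parts = []
    for piece in version_str.split("."):
        if not piece.isdigit():  # stop at suffixes like "dev0" or "rc1"
            break
        parts.append(int(piece))
    return tuple(parts) <= max_supported


def check_transformers_version():
    """Fail fast with an actionable message rather than letting the
    incompatibility surface as an unexpected-keyword TypeError."""
    installed = importlib.metadata.version("transformers")
    if not is_supported(installed):
        raise RuntimeError(
            f"transformers=={installed} is newer than the supported release; "
            "install transformers<=4.41.2 or upgrade LMDeploy."
        )
```

Calling such a guard at engine start-up would turn a confusing mid-inference crash into an immediate, self-explanatory error.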
@lvhan028 @grimoire @RunningLeon @AllentDan
Reproduction

Upgrade `transformers>4.41.2`.

Environment

As described above.

Error traceback

No response
I'll land the code to fix this.
I had this problem and I saw you solved it. Do I now just need to re-run `pip install lmdeploy[all]==0.4.2`?

Run: `pip install lmdeploy[all]==0.4.2 --force-reinstall --no-deps`
v0.5.0 is a new release (https://github.com/InternLM/lmdeploy/releases/tag/v0.5.0), you may try it. Cheers.