[Bug] qwen 2 issue when transformers>4.41.2 for PyTorch Engine #1885

Closed · 2 tasks done
zhyncs opened this issue Jun 29, 2024 · 4 comments · Fixed by #1886
Comments

@zhyncs (Collaborator) commented Jun 29, 2024

Checklist

  • 1. I have searched related issues but cannot get the expected help.
  • 2. The bug has not been fixed in the latest version.

Describe the bug

In releases after 4.41.2 (https://pypi.org/project/transformers/4.41.2/#history), Transformers added a cache_position parameter to the Qwen2 attention forward methods: https://github.com/huggingface/transformers/blob/e65502951593a76844e872fee9c56b805598538a/src/transformers/models/qwen2/modeling_qwen2.py#L728-L771

The PyTorch Engine in LMDeploy has not yet been adapted to this change, so the following TypeError occurs:

TypeError: PatchedQwen2Attention.forward() got an unexpected keyword argument 'cache_position'
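The failure mode is plain Python keyword handling: if the patched forward's signature does not declare cache_position (or a **kwargs catch-all), any caller that passes it raises exactly this TypeError. A minimal stand-alone reproduction, using a hypothetical class name rather than LMDeploy's real patch:

```python
# Sketch: a patched forward written against the old transformers call
# convention, which only passed hidden_states and attention_mask.
class PatchedAttention:
    def forward(self, hidden_states, attention_mask=None):
        return hidden_states

attn = PatchedAttention()
attn.forward("hs")  # old call style: works fine

try:
    # Newer transformers releases pass the extra cache_position keyword.
    attn.forward("hs", cache_position=[0, 1, 2])
except TypeError as e:
    print(e)  # forward() got an unexpected keyword argument 'cache_position'
```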

LMDeploy currently does not pin the version of transformers, so whenever a new transformers release changes a model's interface, failures like this will recur.


There are currently two ways to fix this issue: one is to pin transformers<=4.41.2, and the other is to adapt the PyTorch Engine to accept the cache_position parameter.
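The second option can be sketched roughly as follows. This is an illustration of the approach only, not the actual patch from #1886: declare the new keyword in the patched signature, and optionally accept **kwargs so future keyword additions do not break the patch again.

```python
class PatchedQwen2Attention:
    # Sketch of an adapted forward: cache_position is declared (and may
    # simply be ignored if the engine manages its own KV cache), while
    # **kwargs absorbs any keywords future transformers releases add.
    def forward(self, hidden_states, attention_mask=None,
                position_ids=None, past_key_value=None,
                cache_position=None, **kwargs):
        # ... the real attention computation would go here ...
        return hidden_states

attn = PatchedQwen2Attention()
attn.forward("hs")                         # pre-4.41.2 call style
attn.forward("hs", cache_position=[0, 1])  # post-4.41.2 call style
```

The first option would instead be a one-line version constraint such as transformers<=4.41.2 in the dependency specification, at the cost of blocking users from newer transformers releases.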

Do you have any suggestions? This issue is serious and needs to be fixed before the v0.5.0 release.

@lvhan028 @grimoire @RunningLeon @AllentDan

Reproduction

Upgrade transformers to a version newer than 4.41.2 and run a Qwen2 model with the PyTorch Engine.

Environment

as described above

Error traceback

No response

@zhyncs (Collaborator, Author) commented Jun 29, 2024

I'll land the code to fix this.

@Volta-lemon

I had this problem and saw that you solved it. Do I now just need to re-run pip install lmdeploy[all]==0.4.2?

@Volta-lemon

I ran: pip install lmdeploy[all]==0.4.2 --force-reinstall --no-deps

@zhyncs (Collaborator, Author) commented Jul 1, 2024, quoting the command above:

https://github.com/InternLM/lmdeploy/releases/tag/v0.5.0

v0.5.0 is a new release; you may try it. Cheers.
