Hello, when will quantization of CogVLM2 be supported? The model is Zhipu's https://huggingface.co/THUDM/cogvlm2-llama3-chat-19B. Can https://github.com/InternLM/lmdeploy/blob/main/docs/en/quantization/w4a16.md be used to help quantize it? Thanks!
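For context, the w4a16 guide linked above quantizes a Hugging Face model with lmdeploy's `lmdeploy lite auto_awq` command. A minimal sketch of that workflow is below, using the model path from the question; note that, per the reply in this thread, this path does not yet work for cogvlm2, so this only illustrates the general command the guide describes:

```shell
# Sketch of the standard lmdeploy w4a16 (AWQ) quantization flow from the
# linked guide. NOTE: cogvlm2 quantization is not yet supported, so this
# command is expected to fail for this particular model.
pip install lmdeploy

# Quantize weights to 4 bits with AWQ; the quantized model is written
# to --work-dir. --w-bits and --w-group-size follow the guide's defaults.
lmdeploy lite auto_awq \
    THUDM/cogvlm2-llama3-chat-19B \
    --w-bits 4 \
    --w-group-size 128 \
    --work-dir ./cogvlm2-llama3-chat-19B-4bit
```

For supported LLMs, the resulting `--work-dir` can then be served directly with `lmdeploy serve` or loaded for inference.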
0.5.0 supports cogvlm2. You may give it a try.
@AllentDan @grimoire
My mistake. 0.5.0 supports cogvlm2 but doesn't support its quantization yet.