Can the glm-4v-9b model be supported? #1916
Comments
@liyuan1208 hi, glm-4v-9b will be supported by lmdeploy's PyTorch engine. We'll update this issue once the PR is created.
Roughly how long until this lands? Looking forward to it 😚
Just my personal take: supporting it before vLLM does would attract a big wave of users...
@danxuan2022 hi, you could try this PR #1947
👍 👍 Going to try it right now~
Just gave it a try: deploying the glm-4v-9b downloaded from ModelScope works fine~ However, running lmdeploy on a fine-tuned glm-4v-9b raises an error; could you please take a look? Steps: merged the LoRA weights into the fine-tuned model, then deployed the converted model. The error message looks strange, as if it were an environment problem, but deploying the ModelScope glm-4v-9b in the same environment works, so I suspect it is most likely not the environment. The error message is as follows:
2024-07-15 16:44:07,336 - lmdeploy - WARNING - Try to run with pytorch engine because
@danxuan2022 It looks like your environment is not set up correctly and the triton installation is broken. Try reinstalling with triton==2.1.0.
Motivation
While trying to adapt the glm-4v-9b model (actually 13.9B parameters in total; the vision part is 4.9B), I found that glm4v applies special handling to the input position_ids:
```python
new_input_embeds.append(torch.cat(
    (inputs_embeds[i, :boi_token_pos], images_features[i],
     inputs_embeds[i, eoi_token_pos + 1:])))
new_position_ids.append(torch.cat(
    (position_ids[i, :boi_token_pos + 1],
     position_ids[i, boi_token_pos + 1].repeat(num_patches),
     position_ids[i, eoi_token_pos:])))
```
It assigns one and the same value to the position_ids of the entire visual-feature span, which is later used when computing RoPE.
The turbomind engine does not seem to expose an interface for modifying position_ids. After full-parameter fine-tuning, glm-4v gives the best results among open-source models in our scenario, so we hope the team can officially support glm-4v-9b.
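To make the special handling concrete, here is a minimal sketch in plain Python of how the snippet above constructs the new position ids: the text prefix up to and including the BOI token keeps its original positions, every image patch reuses the single position id right after the BOI token, and the text after the EOI token continues with its original positions. The function name and argument names are hypothetical (the real model operates on tensors):

```python
def build_glm4v_position_ids(seq_len, boi_token_pos, eoi_token_pos, num_patches):
    """Sketch of glm-4v-9b's position_ids construction for one sequence.

    Assumes the original position_ids are simply 0..seq_len-1.
    """
    # Positions up to and including the BOI token keep their original ids.
    prefix = list(range(boi_token_pos + 1))
    # All image patches share the position id just after the BOI token,
    # mirroring position_ids[i, boi_token_pos + 1].repeat(num_patches).
    patches = [boi_token_pos + 1] * num_patches
    # Text from the EOI token onward keeps its original ids,
    # mirroring position_ids[i, eoi_token_pos:].
    suffix = list(range(eoi_token_pos, seq_len))
    return prefix + patches + suffix
```

For example, with a toy sequence of length 10, BOI at index 2, EOI at index 7, and 4 image patches, the result is `[0, 1, 2, 3, 3, 3, 3, 7, 8, 9]` — the four patch positions collapse to a single value, which is why an engine needs a hook to override position_ids before RoPE is applied.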
Related resources
GLM-4 model link:
https://github.com/THUDM/GLM-4
Additional context
No response