In the web demo, inference only supports plain language-model chat; there is no multimodal inference #461
Comments
No, the actual issue is that Yi-VL-6B cannot chat even without an image input; the web-based inference only works for pure language models.
Yes, the web UI does not support inference for multimodal models yet; you can use
OK, thanks. Then would it be possible to use a loop to extract all the key fields from a JSON file and run inference once for each entry in the JSON?
Something like this (this is Qwen-VL's batch-prediction code):

```python
def parse_text(text): ...

def predict(message): ...

model, tokenizer = _load_model_tokenizer()

if __name__ == '__main__':
    ...
```
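Building on the skeleton above, a minimal sketch of the batch loop described in the question, assuming the JSON file holds a list of records; both the field name and the `predict` stub here are hypothetical stand-ins for the project's real inference call:

```python
import json

def predict(message):
    # hypothetical stand-in for the actual model inference call
    return f"response: {message}"

def batch_predict(path, field="question"):
    # extract the key field from every record in the JSON file
    # and run inference once per entry
    with open(path, "r", encoding="utf-8") as f:
        data = json.load(f)
    return [predict(item[field]) for item in data]
```

Swapping `predict` for the real model call (and `field` for the actual key in your dataset) would give the batch behavior asked about above.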
In the earlier 1.5.4 version, qwen-vl-chat could be loaded and image inference worked by passing "<img>xxx.jpg</img>" as input, but as of 1.7.0 this is no longer supported. Can it be restored?
For example:
I have already fine-tuned Yi-VL-6B and want to chat with it through the web-based inference UI, but the inference page has no dedicated image-input option and only plain chat is available. I hope this can be improved.
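For reference, the inline image tag mentioned in the comment about version 1.5.4 can be composed like this (a minimal sketch; the helper name is hypothetical, and the tag format follows the Qwen-VL-Chat `<img>...</img>` convention quoted above):

```python
def make_image_prompt(image_path, question):
    # Qwen-VL-Chat style: reference the image inline with an
    # <img>...</img> tag placed before the question text
    return f"<img>{image_path}</img>{question}"

print(make_image_prompt("xxx.jpg", "What is in this picture?"))
```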