System Info

optimum 1.16.2
onnx 1.15.0
onnxruntime 1.17.0
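When filing version-sensitive reports like this one, the version triple above can be gathered programmatically instead of by hand. A minimal sketch using only the standard library (the package list is just the three packages relevant to this issue):

```python
import importlib.metadata

def installed_versions(packages=("optimum", "onnx", "onnxruntime")):
    """Map each package name to its installed version, or None if absent."""
    versions = {}
    for pkg in packages:
        try:
            versions[pkg] = importlib.metadata.version(pkg)
        except importlib.metadata.PackageNotFoundError:
            versions[pkg] = None
    return versions

print(installed_versions())
```

Pasting that dictionary into the System Info section avoids transcription mistakes between environments (e.g. Colab vs. local).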
I exported ONNX files for the mt5-large model, and when I tried to quantize them, the process failed on decoder_model_merged.onnx.
from optimum.onnxruntime.configuration import AutoQuantizationConfig
from optimum.onnxruntime import ORTQuantizer

qconfig = AutoQuantizationConfig.arm64(is_static=False, per_channel=False)
quantizer = ORTQuantizer.from_pretrained("/content/drive/MyDrive/mt5-large", file_name="decoder_model_merged.onnx")
quantizer.quantize(save_dir="/content/drive/MyDrive/mt5-large", quantization_config=qconfig)
Expected behavior

Quantization completes and writes decoder_model_merged_quantized.onnx to the save directory.
Hi @jhpassion0621, this is a bug introduced in onnxruntime 1.17. Please downgrade to onnxruntime 1.16 while waiting for a fix to be released: microsoft/onnxruntime#19421
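Since the failure depends on the installed onnxruntime version, a small check before calling `quantize()` can turn the crash into an actionable message. A minimal sketch — the affected range below reflects this report (1.17.x), not an official compatibility table, and the pip pin in the message is just the downgrade suggested above:

```python
def parse_version(v):
    """Turn a dotted version string like '1.17.0' into a tuple of ints."""
    return tuple(int(part) for part in v.split(".")[:3])

def is_affected(ort_version):
    """True for onnxruntime 1.17.x, the range reported in this issue."""
    return parse_version(ort_version)[:2] == (1, 17)

def check_runtime():
    """Return a warning string if the installed onnxruntime is affected, else None."""
    try:
        import onnxruntime
    except ImportError:
        return None  # onnxruntime not installed; nothing to guard
    if is_affected(onnxruntime.__version__):
        return ("onnxruntime {} fails to quantize decoder_model_merged.onnx "
                '(microsoft/onnxruntime#19421); downgrade with pip install "onnxruntime<1.17"'
                ).format(onnxruntime.__version__)
    return None
```

Calling `check_runtime()` before `ORTQuantizer.from_pretrained` lets the script stop with a clear hint instead of a deep traceback from inside the quantizer.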
Thank you very much, it works well on 1.16. @fxmarty