Update wheel path to Whisper custom export script (#15739)
### Description
This PR updates the documentation for using the Whisper custom export
scripts via the wheel.



### Motivation and Context
The module path should be
`onnxruntime.transformers.models.whisper.convert_to_onnx`, not
`onnxruntime.transformers.models.convert_to_onnx`; the latter omits the
`whisper` subpackage, so `python3 -m` cannot resolve it.
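As a side note, whether a dotted path works with `python3 -m` can be checked without actually running the script, since `python -m` uses the same import machinery as `importlib`. The sketch below is illustrative and uses stdlib modules for the demo, because `onnxruntime` may not be installed in every environment; `module_is_runnable` is a hypothetical helper name, not part of any package.

```python
import importlib.util

def module_is_runnable(dotted_path: str) -> bool:
    """Return True if `python3 -m <dotted_path>` could resolve the module.

    find_spec() walks the same import machinery that `python -m` uses,
    so a None result means the invocation would fail with
    "No module named <dotted_path>".
    """
    try:
        return importlib.util.find_spec(dotted_path) is not None
    except ModuleNotFoundError:
        # Raised when an intermediate package in the path is missing,
        # which is exactly the failure mode this PR fixes.
        return False

# Stdlib demo (substitute the onnxruntime paths once the wheel is installed):
print(module_is_runnable("json.tool"))         # existing runnable module -> True
print(module_is_runnable("json.no_such_mod"))  # missing leaf module -> False
```

With the wheel installed, the same check applied to the corrected path from this PR should succeed, while the old path without the `whisper` segment should not.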
kunal-vaishnavi committed Apr 30, 2023
1 parent 4fbc08e commit 7ae01ce
Showing 1 changed file with 3 additions and 3 deletions.
Export + Optimize for FP32

```diff
@@ -26,7 +26,7 @@ Export + Optimize for FP32
 $ python3 convert_to_onnx.py -m openai/whisper-tiny --output whispertiny --use_external_data_format --optimize_onnx --precision fp32
 # From wheel:
-$ python3 -m onnxruntime.transformers.models.convert_to_onnx -m openai/whisper-tiny --output whispertiny --use_external_data_format --optimize_onnx --precision fp32
+$ python3 -m onnxruntime.transformers.models.whisper.convert_to_onnx -m openai/whisper-tiny --output whispertiny --use_external_data_format --optimize_onnx --precision fp32
```

Export + Optimize for FP16 and GPU

```diff
@@ -35,7 +35,7 @@ Export + Optimize for FP16 and GPU
 $ python3 convert_to_onnx.py -m openai/whisper-tiny --output whispertiny --use_external_data_format --optimize_onnx --precision fp16 --use_gpu
 # From wheel:
-$ python3 -m onnxruntime.transformers.models.convert_to_onnx -m openai/whisper-tiny --output whispertiny --use_external_data_format --optimize_onnx --precision fp16 --use_gpu
+$ python3 -m onnxruntime.transformers.models.whisper.convert_to_onnx -m openai/whisper-tiny --output whispertiny --use_external_data_format --optimize_onnx --precision fp16 --use_gpu
```

Export + Quantize for INT8

```diff
@@ -44,5 +44,5 @@ Export + Quantize for INT8
 $ python3 convert_to_onnx.py -m openai/whisper-tiny --output whispertiny --use_external_data_format --precision int8 --quantize_embedding_layer
 # From wheel:
-$ python3 -m onnxruntime.transformers.models.convert_to_onnx -m openai/whisper-tiny --output whispertiny --use_external_data_format --precision int8 --quantize_embedding_layer
+$ python3 -m onnxruntime.transformers.models.whisper.convert_to_onnx -m openai/whisper-tiny --output whispertiny --use_external_data_format --precision int8 --quantize_embedding_layer
```
