I have searched the HUB issues and discussions and found no similar questions.
Question
I am using the preview feature to test my model, and everything works fine. However, when I use the TensorFlow Lite version of the same model with FP16 and INT8 optimizations, its accuracy is noticeably lower than what I get on the Preview tab.
Can anyone please help me figure out the root cause and get to the bottom of the issue, so that I can make the most of my model?
Additional
No response
The difference in accuracy between the model in the preview feature of Ultralytics HUB and the TensorFlow Lite model with FP16 and INT8 optimizations typically stems from the precision reduction inherent to these optimizations. Both FP16 (16-bit floating-point) and INT8 (8-bit integer) quantization reduce model size and computation requirements, making them highly beneficial for mobile devices like Android.
However, this benefit comes at a potential cost to accuracy, because the model's weights are approximated to fit into a smaller, less precise numerical format. This is likely why you're observing a discrepancy in detection accuracy compared to the preview feature, which uses the full-precision model.
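As a toy illustration of where that gap comes from, the sketch below (plain NumPy, not the actual TFLite quantizer) round-trips a set of float32 weights through symmetric per-tensor INT8 quantization and compares the reconstruction error against a simple FP16 cast. The function names and weight distribution are hypothetical, chosen only to show that INT8 rounding error is typically much larger than FP16 rounding error:

```python
import numpy as np

def int8_quantize(weights):
    """Symmetric per-tensor INT8 quantization: map floats to [-127, 127]."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the quantized values."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.05, size=10_000).astype(np.float32)  # toy weight tensor

q, scale = int8_quantize(w)
w_hat = dequantize(q, scale)

# Mean absolute error introduced by each precision reduction
fp16_err = np.abs(w - w.astype(np.float16).astype(np.float32)).mean()
int8_err = np.abs(w - w_hat).mean()
```

On a tensor like this, `int8_err` comes out well above `fp16_err`, which matches the usual observation that INT8 costs more accuracy than FP16 and is why representative calibration data matters for INT8 export.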
To mitigate this issue, you might want to:
- Experiment with one optimization at a time (FP16 only, then INT8 only) to see which one impacts your accuracy the least.
- For INT8 export, make sure the quantizer sees a representative calibration dataset; unrepresentative calibration data is a common cause of large INT8 accuracy drops.
- Consider training on more robust, varied data, which can sometimes help retain accuracy post-optimization.
For specific instructions on how to optimize models while aiming to maintain high accuracy, please refer to our documentation at https://docs.ultralytics.com/hub.
Optimizing models for mobile deployment usually involves some trial and error, but the gains in portability and efficiency are well worth it. Keep up the good work! 🚀
👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.
For additional resources and information, please see the links below:
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!
Thank you for your contributions to YOLO 🚀 and Vision AI ⭐