
TensorFlow Lite model with FP16 & INT8 optimizations: detection accuracy on Android compared with the HUB Preview feature #612

Closed
vipin-prabhakaran opened this issue Mar 16, 2024 · 2 comments
Labels
question (A HUB question that does not involve a bug), Stale

Comments

@vipin-prabhakaran

Search before asking

Question

I am using the Preview feature to test my model, and everything works fine. However, when I use the TensorFlow Lite model with FP16 & INT8 optimizations, it does not reach the accuracy I get with the same model in the Preview tab.

Can anyone please help me figure out the root cause and get to the bottom of this issue, so that I can make the most of my model?

Additional

No response

@vipin-prabhakaran vipin-prabhakaran added the question A HUB question that does not involve a bug label Mar 16, 2024
@UltralyticsAssistant (Member)

@vipin-prabhakaran hello! Thanks for reaching out with your question. 🌟

When you see a difference in accuracy between the model in the Ultralytics HUB Preview feature and the TensorFlow Lite model with FP16 & INT8 optimizations, it typically stems from the precision reduction inherent to these optimizations. Both FP16 (16-bit floating point) and INT8 (8-bit integer) quantization reduce model size and computation requirements, making them highly beneficial for mobile devices such as Android.
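As a rough sketch of what those exports look like with the Ultralytics Python API (file names such as `yolov8n.pt` and `coco8.yaml` are placeholders; substitute your own weights and dataset, and check the export docs for the exact arguments your version supports):

```python
from ultralytics import YOLO

# Load your trained model (yolov8n.pt is a placeholder for your own weights)
model = YOLO("yolov8n.pt")

# FP16 TFLite export: weights stored as 16-bit floats,
# usually a small accuracy drop at roughly half the model size
model.export(format="tflite", half=True)

# INT8 TFLite export: weights and activations quantized to 8-bit integers;
# calibration uses a representative dataset supplied via the `data` argument
model.export(format="tflite", int8=True, data="coco8.yaml")
```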

However, this benefit comes at a potential cost to accuracy, since the model's weights (and, for INT8, activations) are approximated in a smaller, less precise numerical format. This is likely why you are observing a discrepancy in detection accuracy compared to the Preview feature, which runs the full-precision model.

To mitigate this issue, you might want to:

  1. Experiment with only one type of optimization at a time to see which one impacts your accuracy the least (see the validation sketch after this list).
  2. Consider using more robust data for training, which can sometimes help in retaining accuracy post-optimization.
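
One way to quantify that comparison is to validate each exported variant on the same dataset and compare mAP. A minimal sketch, assuming a recent `ultralytics` package and the placeholder file names from the export example above:

```python
from ultralytics import YOLO

# Validate the original and each quantized export on the same dataset
# so the mAP numbers are directly comparable.
for weights in ("yolov8n.pt", "yolov8n_float16.tflite", "yolov8n_int8.tflite"):
    metrics = YOLO(weights).val(data="coco8.yaml")
    print(f"{weights}: mAP50-95 = {metrics.box.map:.3f}")
```

If FP16 alone stays close to the full-precision mAP while INT8 drops sharply, the INT8 calibration (the representative dataset) is the first thing to revisit.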

For specific instructions on how to optimize models while aiming to maintain high accuracy, please refer to our documentation at https://docs.ultralytics.com/hub.

Optimizing models for mobile deployment often involves some trial and error, but the gains in portability and efficiency are immense. Keep up the good work! 🚀

@github-actions

👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.


Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLO 🚀 and Vision AI ⭐

@github-actions github-actions bot added the Stale label Apr 16, 2024
@github-actions github-actions bot closed this as not planned Apr 26, 2024