
TensorFlow Lite model deployment results #611

Closed
1 task done
jamesleech89 opened this issue Mar 14, 2024 · 3 comments
Labels
question A HUB question that does not involve a bug Stale

Comments

@jamesleech89

Search before asking

Question

I have a new YOLOv5 object detection model trained on some new images with two classes. The model performs well in the Ultralytics preview tab and when I call the API for the model. I exported a version of the model in TensorFlow Lite format and ran it on some of the same images that I passed to the preview tab and through the API. Using the same confidence threshold and applying non-maximum suppression with the same IoU threshold gives me different results from those I get through the API and the preview tab. Even if I remove the non-maximum suppression step I apply on top of the TensorFlow Lite model, I can clearly see that the predictions differ (i.e. additional boxes are predicted for classes that are not predicted at all through the API/preview tab).
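For concreteness, the post-processing described above (confidence filtering followed by greedy NMS at a fixed IoU threshold) can be sketched in plain NumPy. The `nms` helper and its default thresholds are illustrative assumptions, not the exact code used in this thread:

```python
import numpy as np

def iou(box, boxes):
    # IoU of one (x1, y1, x2, y2) box against each row of `boxes`.
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, conf_thres=0.25, iou_thres=0.45):
    # Drop low-confidence boxes, then greedily suppress overlapping ones,
    # highest score first.
    mask = scores >= conf_thres
    boxes, scores = boxes[mask], scores[mask]
    order = scores.argsort()[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        order = rest[iou(boxes[i], boxes[rest]) <= iou_thres]
    return boxes[keep], scores[keep]
```

If the same function with the same thresholds is applied to both the TFLite output and a reference output, any remaining differences must come from the model or preprocessing, not from post-processing.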

Additional

No response

@jamesleech89 jamesleech89 added the question A HUB question that does not involve a bug label Mar 14, 2024

👋 Hello @jamesleech89, thank you for raising an issue about Ultralytics HUB 🚀! Please visit our HUB Docs to learn more:

  • Quickstart. Start training and deploying YOLO models with HUB in seconds.
  • Datasets: Preparing and Uploading. Learn how to prepare and upload your datasets to HUB in YOLO format.
  • Projects: Creating and Managing. Group your models into projects for improved organization.
  • Models: Training and Exporting. Train YOLOv5 and YOLOv8 models on your custom datasets and export them to various formats for deployment.
  • Integrations. Explore different integration options for your trained models, such as TensorFlow, ONNX, OpenVINO, CoreML, and PaddlePaddle.
  • Ultralytics HUB App. Learn about the Ultralytics App for iOS and Android, which allows you to run models directly on your mobile device.
    • iOS. Learn about YOLO CoreML models accelerated on Apple's Neural Engine on iPhones and iPads.
    • Android. Explore TFLite acceleration on mobile devices.
  • Inference API. Understand how to use the Inference API for running your trained models in the cloud to generate predictions.

If this is a 🐛 Bug Report, please provide screenshots and steps to reproduce your problem to help us get started working on a fix.

If this is a ❓ Question, please provide as much information as possible, including dataset, model, environment details etc. so that we might provide the most helpful response.

We try to respond to all issues as promptly as possible. Thank you for your patience!

@UltralyticsAssistant
Member

@jamesleech89 hello! It sounds like you're seeing discrepancies between your model's TensorFlow Lite predictions and its results via the Ultralytics API and preview tab. This can happen for several reasons:

  1. Quantization: If your TensorFlow Lite model is quantized, it might introduce slight precision losses, affecting the model's predictions. This is a common trade-off for the reduced model size and faster inference times on edge devices.

  2. Preprocessing and Postprocessing: Ensure that the image preprocessing (resizing, normalization) and postprocessing (applying confidence thresholds and non-maximum suppression) steps are consistent across all platforms. Differences in these steps can lead to varied results.

  3. Model Version: Double-check that the TensorFlow Lite model is exported from the exact same model version and weights as the one used in the Ultralytics preview tab and API.

  4. Framework Differences: Subtle differences in how frameworks implement operations can lead to discrepancies. TensorFlow Lite may handle certain operations differently from PyTorch, in which Ultralytics models are originally implemented.
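Point 2 is the most common culprit in practice. As an illustration, a YOLOv5-style letterbox preprocessing step can be sketched in plain NumPy; the 640-pixel input size, grey padding value 114, and [0, 1] normalization follow common YOLOv5 defaults, but these are assumptions that should be checked against the actual export settings:

```python
import numpy as np

def letterbox(img, new_shape=640, pad_value=114):
    # Resize with preserved aspect ratio, pad to a square canvas, and
    # normalize to [0, 1]. Nearest-neighbour resampling keeps this sketch
    # dependency-free; real pipelines typically use bilinear interpolation.
    h, w = img.shape[:2]
    r = new_shape / max(h, w)
    nh, nw = round(h * r), round(w * r)
    ys = (np.arange(nh) / r).astype(int).clip(0, h - 1)
    xs = (np.arange(nw) / r).astype(int).clip(0, w - 1)
    resized = img[ys][:, xs]
    out = np.full((new_shape, new_shape, img.shape[2]), pad_value, dtype=img.dtype)
    top, left = (new_shape - nh) // 2, (new_shape - nw) // 2
    out[top:top + nh, left:left + nw] = resized
    return out.astype(np.float32) / 255.0
```

If the TFLite pipeline stretches images to a square instead of letterboxing them, or normalizes with different constants, box coordinates and scores will diverge from the reference results even when the weights are identical.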

For a detailed guide on exporting models and ensuring consistency across different platforms, please refer to the Ultralytics HUB Docs. If the issue persists, consider providing more details about the preprocessing and postprocessing steps, along with any specific settings used during the TensorFlow Lite model export. This will help in diagnosing the issue more accurately. 😊


👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLO 🚀 and Vision AI ⭐

@github-actions github-actions bot added the Stale label Apr 14, 2024
@github-actions github-actions bot closed this as not planned Apr 25, 2024