
Converting a custom model to mlmodel gives 100% confidence no matter what object it recognizes #51

Closed
1 task done
ys-ocean opened this issue Jun 29, 2022 · 13 comments
Labels: app (Issue related to Ultralytics HUB App), bug (Something isn't working), todo (Further action is needed by Ultralytics)

ys-ocean commented Jun 29, 2022

Search before asking

  • I have searched the HUB issues and found no similar bug report.

HUB Component

Inference

Bug

Hi, I found a strange problem. Custom models on the HUB convert fine and behave normally in the online test (Preview), but models downloaded through the Ultralytics App report 100% confidence and also recognize objects outside the model's classes. The same is true when exporting in Ultralytics iOS format. If you have any ideas or answers, please let me know, thank you very much!

This is the dataset: key.zip

[3 screenshots and photo IMG_1873 attached]

Environment

MacBook Pro, Xcode

Minimal Reproducible Example

No response

Additional

No response

ys-ocean added the bug (Something isn't working) label on Jun 29, 2022
github-actions (bot) commented

👋 Hello @ys-ocean, thank you for raising an issue about Ultralytics HUB 🚀! Please visit https://ultralytics.com/hub to learn more, and see our ⭐️ HUB Guidelines to quickly get started uploading datasets and training YOLOv5 models.

If this is a 🐛 Bug Report, please provide screenshots and steps to recreate your problem to help us get started working on a fix.

If this is a ❓ Question, please provide as much information as possible, including dataset, model, environment details etc. so that we might provide the most helpful response.

We try to respond to all issues as promptly as possible. Thank you for your patience!

glenn-jocher (Member) commented Jun 29, 2022

@ys-ocean your training results look fine. If you're asking why the Preview shows 88% confidence and the app 100%, it's hard to tell. The app is running on a different image, at a different image size, with a different pipelined NMS, so it's not exactly an apples-to-apples comparison, and you should expect differences.

ys-ocean (Author) commented Jun 30, 2022

@ys-ocean your training results look fine. If you're asking why the Preview shows 88% confidence and the app 100%, it's hard to tell. The app is running on a different image, at a different image size, with a different pipelined NMS, so it's not exactly an apples-to-apples comparison, and you should expect differences.

Hi, first of all thanks for your reply, but I don't quite understand how to solve it.

Now I have downloaded a public dataset, the Raccoon Dataset (https://public.roboflow.com/object-detection/raccoon/2/download/yolov5pytorch), from the Roboflow website (https://roboflow.com/), exported it in YOLOv5 PyTorch format, uploaded it to the Ultralytics HUB, selected YOLOv5n, and left all other parameters at their defaults to generate a new model. I get the same problem: in Export & Test the confidence is mostly 70%-90%, but when I log in and select this model through the Ultralytics App, raccoons and other objects are all recognized at 100%. How should I solve this problem?

The model generated from the coco6 dataset provided by the website with YOLOv5n, and the model generated from the built-in YOLOv5n, are both normal; but datasets I labeled myself, and YOLOv5 PyTorch datasets exported from the Roboflow website, both have this problem.

Converting to TensorFlow Lite format works normally, with confidence around 70%. So could there be some hard-to-pin-down error in the Ultralytics App iOS mlmodel export? Do you have any solution? Thank you very much!

This is the resulting model:
model_- 30 june 2022 11_41.pt.zip
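
As a quick sanity check outside the app, the attached checkpoint can be loaded with the open-source YOLOv5 repo and its raw PyTorch confidences printed; if those look normal (roughly 70-90%), the weights themselves are fine and the 100% readings come from the CoreML/app side. A minimal sketch, assuming the standard YOLOv5 torch.hub interface; the file paths are placeholders:

    import torch

    # Placeholder path for the attached checkpoint; any local test image works.
    model = torch.hub.load('ultralytics/yolov5', 'custom', path='model_custom.pt')
    results = model('test_image.jpg')
    print(results.pandas().xyxy[0])  # per-detection 'confidence' and 'name' columns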

glenn-jocher (Member) commented Jun 30, 2022

@ys-ocean I'm not sure. Theoretically using the open-source YOLOv5 repo or using Ultralytics HUB you should get the same results, though again the Apple NMS is completely different from the PyTorch NMS used to Preview images.

Is this a problem you see only on your custom-trained model, or do you also see it on the default COCO models? Your model is single-class, right? I wonder if the single-class part is causing differences.

@sergiossm FYI user is seeing 100% confidence on single-class HUB CoreML models. I'll investigate.

EDIT: I see that you say TFLite on Android works correctly, so this issue is isolated to CoreML single-class exports.
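
One way to check what the CoreML side is doing is to inspect the exported model's protobuf spec. A sketch, assuming coremltools is installed and that the iOS export bundles decoding and NMS into a pipeline; the .mlmodel path is a placeholder:

    import coremltools as ct

    # Load just the protobuf spec; no macOS runtime is needed for inspection.
    spec = ct.utils.load_spec('best.mlmodel')  # placeholder path
    print(spec.WhichOneof('Type'))   # 'pipeline' would mean decoder + NMS are bundled
    print(spec.description)          # inputs/outputs, including the confidence output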

glenn-jocher added the todo (Further action is needed by Ultralytics) label on Jun 30, 2022
glenn-jocher (Member) commented
TODO: Investigate CoreML single-class export models for possible 100% confidence bug

sergiossm (Member) commented
@ys-ocean Thank you for reporting this error. I'll try to replicate it on my end.

ys-ocean (Author) commented Jul 1, 2022

@ys-ocean I'm not sure. Theoretically using the open-source YOLOv5 repo or using Ultralytics HUB you should get the same results, though again the Apple NMS is completely different from the PyTorch NMS used to Preview images.

Is this a problem you see only on your custom-trained model, or do you also see it on the default COCO models? Your model is single-class, right? I wonder if the single-class part is causing differences.

@sergiossm FYI user is seeing 100% confidence on single-class HUB CoreML models. I'll investigate.

EDIT: I see that you say TFLite on Android works correctly, so this issue is isolated to CoreML single-class exports.

Thank you for your reply! Yes, this is a single-class case, but if I change nc to 2 in the dataset's yaml and leave everything else unchanged:

  1. It still recognizes objects other than the target objects.
  2. The confidence is about 97%; with nc set to 1, the confidence is 100%.

I found a few more problems:

  1. On the HUB, using the YOLOv5n preloaded model and the coco6 dataset provided by the HUB (nc=80, with only 3 pictures in images/train), the final model also recognizes objects that do not belong to its own categories.
  2. I tried two more datasets (https://public.roboflow.com/object-detection/na-mushrooms); the confidence is not 100% but about 97%, yet other objects are still recognized. My guess is that with 1 class the confidence is 100%, and it decreases as the number of classes increases, but objects that do not belong to the corresponding categories are still recognized.

This all happens with models converted to Ultralytics iOS and CoreML formats. The YOLOv5n, YOLOv5s, and other models that come with the Ultralytics HUB are very good and accurate in the iOS Ultralytics App.
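
One hypothesis consistent with these numbers (an assumption, not something confirmed in this thread): YOLOv5 scores a detection as objectness times class probability, and with a single class the class probability is trivially ~1.0, so any pipeline stage that surfaces the class probability alone would always read 100%. A toy illustration:

    # Hypothetical illustration only; not the app's actual scoring code.
    obj_conf = 0.85                  # objectness score for one detection
    cls_probs_nc1 = [1.0]            # nc=1: the single class probability is ~1.0
    cls_probs_nc2 = [0.97, 0.03]     # nc=2: probability mass splits across classes

    print(obj_conf * max(cls_probs_nc1))  # 0.85 -> the correct combined score
    print(max(cls_probs_nc1))             # 1.0  -> shows as "100%" if reported alone
    print(max(cls_probs_nc2))             # 0.97 -> matches the ~97% seen with nc=2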

ys-ocean (Author) commented Jul 1, 2022

@ys-ocean Thank you for reporting this error. I'll try to replicate it on my end.

Thank you for your attention to this issue and for all your help; I look forward to your reply!

glenn-jocher (Member) commented
@ys-ocean I reviewed the iOS export code and don't see anything wrong. Single-class is not handled separately; it uses exactly the same export code as multi-class. Note that the iOS app uses an inference size of 192x320, whereas the web Preview runs at 640 by default (though you can also change it to run at 320). Can you try the Preview at 320px to see if the confidence increases?
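
The same size comparison can be reproduced locally, since the torch.hub AutoShape wrapper accepts a size argument. A sketch, extending the earlier one; the checkpoint path and test image are placeholders:

    import torch

    model = torch.hub.load('ultralytics/yolov5', 'custom', path='best.pt')  # placeholder
    for size in (640, 320):  # web Preview default vs. the smaller app-like size
        results = model('test_image.jpg', size=size)
        print(size, results.pandas().xyxy[0][['name', 'confidence']])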

ys-ocean (Author) commented Jul 5, 2022

@ys-ocean I reviewed the iOS export code and don't see anything wrong. Single-class is not handled separately; it uses exactly the same export code as multi-class. Note that the iOS app uses an inference size of 192x320, whereas the web Preview runs at 640 by default (though you can also change it to run at 320). Can you try the Preview at 320px to see if the confidence increases?

Thank you for your reply. I see that the original YOLOv5s, YOLOv5n, YOLOv5n6, etc. all export to Ultralytics App iOS format with an input image size of 192x320. However, when training at 320 or 640 from the HUB, the exported Ultralytics App iOS mlmodel has an input image size of 640, and the HUB and Colab training Image Size cannot be set to 192x320. Do I need to train a 192x320 model and then export it to mlmodel format?

glenn-jocher (Member) commented
@ys-ocean yes that's correct. The commands to reproduce training and export of official models are:

python train.py --img 640
python export.py --img 320 192

Since scale augmentation is applied during training, models trained at 640 are usable at 320.
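
For a custom model the analogous recipe would presumably be the following, where the --data and --weights values are placeholders and --include coreml selects the CoreML target in export.py:

python train.py --img 640 --data custom.yaml --weights yolov5n.pt
python export.py --weights runs/train/exp/weights/best.pt --include coreml --img 320 192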

ys-ocean (Author) commented Jul 6, 2022

@ys-ocean yes that's correct. The commands to reproduce training and export of official models are:

python train.py --img 640
python export.py --img 320 192

Since scale augmentation is applied during training, models trained at 640 are usable at 320.

Thank you very much! I will try it.

kalenmike added the app (Issue related to Ultralytics HUB App) label on Jul 11, 2022
UltralyticsAssistant (Member) commented
You're welcome, @ys-ocean! If you encounter any further issues or have additional questions, don't hesitate to reach out. Wishing you the best of luck with your model training and export process! 🚀
