Custom model converted to mlmodel reports 100% confidence no matter what object it recognizes #51
Comments
👋 Hello @ys-ocean, thank you for raising an issue about Ultralytics HUB 🚀! Please visit https://ultralytics.com/hub to learn more, and see our ⭐️ HUB Guidelines to quickly get started uploading datasets and training YOLOv5 models. If this is a 🐛 Bug Report, please provide screenshots and steps to recreate your problem to help us get started working on a fix. If this is a ❓ Question, please provide as much information as possible, including dataset, model, environment details etc. so that we might provide the most helpful response. We try to respond to all issues as promptly as possible. Thank you for your patience!
@ys-ocean your training results look fine. If you're asking why the Preview shows 88% confidence and the app 100%, it's hard to tell. The app is running on a different image, at a different image size, with different pipelined NMS, so it's not exactly an apples-to-apples comparison, and you should expect some differences.
Hi, first of all thanks for your reply, but I don't quite understand how to solve it. I downloaded the public Raccoon dataset (https://public.roboflow.com/object-detection/raccoon/2/download/yolov5pytorch) from Roboflow (https://roboflow.com/), exported it in YOLOv5 PyTorch format, uploaded it to Ultralytics HUB, selected YOLOv5n, and left all other parameters at their defaults to train a new model. The same problem occurs: in Export & Test the confidence is roughly 70%-90%, but when I sign in and select this model in the Ultralytics App, recognition of a raccoon (or any other object) is 100%. How should I solve this? The model trained on the coco6 dataset provided by the site and the built-in YOLOv5n model both behave normally, but datasets I labeled myself and YOLOv5 PyTorch datasets exported from Roboflow both show this problem. Converting to TensorFlow Lite format works normally, with confidence around 70%. So could there be some subtle error in the Ultralytics App iOS mlmodel export? Do you have any solution? Thank you very much! This is the resulting model:
@ys-ocean I'm not sure. In theory, using the open-source YOLOv5 repo or using Ultralytics HUB should give the same results, though again the Apple NMS is completely different from the PyTorch NMS used to Preview images. Is this a problem you see only on your custom trained model, or do you also see it on the default COCO models? Your model is single-class, right? I wonder if the single-class part is causing differences. @sergiossm FYI user is seeing 100% confidence on single-class HUB CoreML models. I'll investigate. EDIT: I see that you say TFLite on Android works correctly, so this issue is isolated to CoreML single-class exports.
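For context, here is a minimal sketch of how a YOLOv5-style detection confidence is composed (objectness score times best class probability). This is an assumption about where a single-class export could saturate, not the actual Ultralytics export code: if the pipeline treats the lone class probability as 1.0 and then re-normalizes scores, reported confidence can collapse toward 100%.

```python
# Hypothetical illustration, NOT the Ultralytics export code:
# final confidence = objectness * best class probability.

def detection_confidence(obj_conf, cls_probs):
    """Compose a YOLOv5-style detection confidence from the
    objectness score and the per-class probabilities."""
    return obj_conf * max(cls_probs)

# Multi-class: the class probability tempers the objectness score.
multi = detection_confidence(0.9, [0.1, 0.8, 0.1])  # 0.72

# Single-class: if the lone class probability is treated as 1.0,
# the reported confidence equals raw objectness, and any later
# re-normalization step could push it all the way to 1.0 (100%).
single = detection_confidence(0.9, [1.0])  # 0.9
```

This is only meant to show why single-class models are a plausible place for a confidence-saturation bug to hide.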
TODO: Investigate CoreML single-class export models for possible 100% confidence bug |
@ys-ocean Thank you for reporting this error. I'll try to replicate it on my end.
Thank you for your reply! Yes, this is a single-class case, but if I edit the yaml in the dataset and change nc to 2, leaving everything else unchanged:
Found a few problems:
This all happens with models converted to the Ultralytics iOS and CoreML formats. The YOLOv5n, YOLOv5s and other models that come with Ultralytics HUB are very good and accurate in the iOS Ultralytics App.
Thank you for your attention to this issue and for your help. I look forward to your reply!
@ys-ocean I reviewed the iOS export code and don't see anything wrong. Single-class is not handled separately; it uses the exact same export code as multi-class. Note that the iOS app uses an inference size of 192×320, whereas the web Preview runs at 640 by default (though you can also change it to 320). Can you try the Preview at 320px to see if the confidence increases?
Thank you for your reply. I see that the original YOLOv5s, YOLOv5n, YOLOv5n6 etc. models exported to Ultralytics App iOS format all have an input image size of 192×320. However, when training at 320 or 640 from the HUB, the exported Ultralytics App iOS mlmodel has an input image size of 640, and the HUB and Colab training Image Size cannot be set to 192×320. Do I need to train a 192×320 model and then export it to mlmodel format?
@ys-ocean yes that's correct. The commands to reproduce training and export of the official models are:

```shell
python train.py --img 640
python export.py --img 320 192
```

Since scale augmentation is applied, models trained at 640 are usable at 320.
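To make the 640-training / 192×320-export relationship concrete, here is a small sketch of aspect-preserving letterbox scaling into the app's 192×320 (height × width) input. The helper below is hypothetical, written for illustration, and is not taken from the YOLOv5 codebase:

```python
def letterbox_scale(img_h, img_w, target_h=192, target_w=320):
    """Compute the uniform scale factor and padding needed to fit an
    image into a fixed target size without distorting aspect ratio."""
    scale = min(target_h / img_h, target_w / img_w)
    new_h, new_w = round(img_h * scale), round(img_w * scale)
    # The remaining pixels are filled with border padding
    # (gray letterbox bars in YOLOv5 preprocessing).
    return scale, (target_h - new_h, target_w - new_w)

# A square 640x640 training image is scaled by 0.3 down to 192x192,
# leaving 128 px of horizontal padding inside the 192x320 input.
scale, (pad_h, pad_w) = letterbox_scale(640, 640)
```

Because the content is only rescaled (never stretched), a model trained at 640 with scale augmentation can still run sensibly at the smaller 192×320 app input.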
Thank you very much! I will try it. |
You're welcome, @ys-ocean! If you encounter any further issues or have additional questions, don't hesitate to reach out. Wishing you the best of luck with your model training and export process! 🚀
Search before asking
HUB Component
Inference
Bug
Hi, I found a strange problem. Custom models on the HUB behave normally in the online test/Preview, but the same models downloaded through the Ultralytics App report 100% confidence, and even objects outside the model's classes are recognized. The same is true when exporting Ultralytics iOS models. If you have any ideas or answers, please let me know, thank you very much!
This is the dataset:
key.zip
Environment
Macbook Pro,Xcode
Minimal Reproducible Example
No response
Additional
No response