
After Model training on HUB, while testing model getting below error #682

Closed
1 task done
kumarneeraj2005 opened this issue May 13, 2024 · 33 comments
Assignees
Labels
bug (Something isn't working), fixed (Bug is resolved)

Comments

@kumarneeraj2005

Search before asking

  • I have searched the HUB issues and found no similar bug report.

HUB Component

Inference

Bug

Once model training on HUB Pro is completed, your platform reports a problem during preview testing of the pose detection model. Could you kindly tell me what the reason is?
WhatsApp Image 2024-05-13 at 10 44 50

Environment

No response

Minimal Reproducible Example

No response

Additional

No response

@kumarneeraj2005 kumarneeraj2005 added the bug Something isn't working label May 13, 2024
@sergiuwaxmann sergiuwaxmann changed the title HubPro- After Model training on HUB, while testing model getting below error After Model training on HUB, while testing model getting below error May 13, 2024
@sergiuwaxmann sergiuwaxmann self-assigned this May 13, 2024
@sergiuwaxmann
Member

@kumarneeraj2005 We just tested Pose inference on our end using a model trained on the Ultralytics COCO8-POSE Dataset and everything seems to be working fine.

The error you are seeing could be happening because of:

  1. Your model (very unlikely)
  2. Something wrong with our shared inference endpoint

Can you please share your model ID with us so we can investigate this further?

@kumarneeraj2005
Author

@sergiuwaxmann
image
Please check these 3 models, which I trained using your HUB Pro. Look at the sizes and names of the models; it's unusual that the Pose m model is bigger than the Pose l model.

@sergiuwaxmann
Member

@kumarneeraj2005 The size shown in the screenshot you shared represents the size of the model plus all exported formats. Just by looking at the screenshot, I imagine the first two models are identical, the only difference being some exports performed on the second one.

Please share the model IDs (ID in the URL on the model page).

@kumarneeraj2005
Author

@sergiuwaxmann
Member

@kumarneeraj2005 Looks like inference is working correctly for the model ID you provided.
API response:

{
    "data": [
        {
            "class": 0,
            "confidence": 0.952075719833374,
            "keypoints": [
                0.596757173538208,
                0.6352880597114563,
                1.0,
                0.6028388142585754,
                0.694744348526001,
                1.0,
                0.6017828583717346,
                0.7683327198028564,
                1.0
            ],
            "name": "kidney"
        }
    ],
    "message": "Inference complete.",
    "success": true
}

Do you still have this issue?
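An aside on the response format above: each detection's keypoints field is a flat list that appears to group into (x, y, confidence) triplets, with coordinates seemingly normalized to the image dimensions (an assumption, not confirmed in this thread). A minimal sketch for regrouping it (the helper name group_keypoints is mine, not from the API):

```python
def group_keypoints(flat):
    """Split a flat [x1, y1, c1, x2, y2, c2, ...] list into (x, y, conf) triplets."""
    if len(flat) % 3 != 0:
        raise ValueError("keypoints list length must be a multiple of 3")
    return [tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)]

# The keypoints array from the API response above
keypoints = [
    0.596757173538208, 0.6352880597114563, 1.0,
    0.6028388142585754, 0.694744348526001, 1.0,
    0.6017828583717346, 0.7683327198028564, 1.0,
]
print(group_keypoints(keypoints))  # three (x, y, conf) tuples
```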

@kumarneeraj2005
Author

@sergiuwaxmann Yes, I understand; it is working perfectly for you, and for me as well, but a few photographs are presenting that issue. I discovered the problem: your platform's error handling is ineffective.

@sergiuwaxmann
Member

@kumarneeraj2005 What do you mean by "error handling is ineffective"? Can you explain the issue you are facing (a Minimal Reproducible Example) and share one of the images that is causing it so that we can improve our platform?

@kumarneeraj2005
Author

@sergiuwaxmann Keep the confidence threshold high, for example 80, and if the picture's object is faded (YOLO could not recognize the object and returned null), the platform should display a proper message instead of an error message.
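For illustration, the requested behavior could look like this on the server side: a hypothetical sketch (names and payload shape are illustrative only, not HUB's actual code) that filters detections by the confidence threshold and returns a friendly empty-result payload instead of raising an error:

```python
def format_preview(detections, conf_threshold=0.8):
    """Return a user-facing preview payload; never raise on empty results."""
    kept = [d for d in detections if d.get("confidence", 0.0) >= conf_threshold]
    if not kept:
        # Nothing passed the threshold: report it plainly instead of erroring.
        return {
            "success": True,
            "data": [],
            "message": "No objects detected above the confidence threshold.",
        }
    return {"success": True, "data": kept, "message": "Inference complete."}
```

With this shape, a faded image that yields no detections produces an explanatory message rather than an "Unhandled server error."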

@sergiuwaxmann
Member

@kumarneeraj2005

I can confirm this issue (POSE model):
no_results_pose

Expected behavior (DETECT model):
no_results_detect

We will fix this issue in the next release (following days) - will keep you updated.

Thank you for bringing this to our attention!

@suren1986

I'm also having this problem. My old models work fine, but my new model does not.

New model: https://hub.ultralytics.com/models/MFE4Iwe37kJRfqy9440W
Old model: https://hub.ultralytics.com/models/yWb4FSzpQ9wBXqf5WzCX

Besides the preview error, I also encounter another error when exporting, while the old model works fine.
(screenshot attached)

Thank you for the hard work and expect this issue to be fixed soon.

@sergiuwaxmann
Member

sergiuwaxmann commented May 21, 2024

@suren1986 The POSE preview issue was solved and the fix will be deployed in the next release (by the looks of it, tomorrow by EOD). If you are having a similar issue for segmentation, it might be the same problem with #691. Can you maybe share an image that has this issue? Or does this issue occur for any image?

Regarding the export, does this issue occur for any new models or just for the model you shared? Can you train a new model for 2-3 epochs and check if you have the same issue? Also, maybe before training a new model, can you try again (it could be that the server faced a temporary issue)? If you still have this issue, can you open a new issue in order to properly log this issue and so that other users can easily find the discussion?

@sctcorp01

I have the same problem.
All images fail during preview of the trained model.

I tried changing both the 'Confidence Threshold' and 'IoU Threshold' settings, but it still doesn't work.

Here's my model ID : https://hub.ultralytics.com/models/D9ttDl0mWIyPHisfp4lQ

capture

@pderrenger
Member

@sctcorp01 hi there! Thanks for sharing the details. It seems like you're encountering a known issue with the preview functionality for certain models. We are actively working on a fix for this. In the meantime, could you please try running your model using the API directly with a sample image to see if the inference works outside of the preview environment? Here's a quick example of how to do this using Python:

import requests

# Replace the model ID in the URL and 'your_api_key_here' with your own values
url = "https://api.ultralytics.com/v1/predict/D9ttDl0mWIyPHisfp4lQ"
headers = {"x-api-key": "your_api_key_here"}

with open("path/to/your/image.jpg", "rb") as image_file:
    files = {"image": image_file}
    response = requests.post(url, headers=headers, files=files)
    print(response.json())

This might help us understand if the issue is specific to the preview or a broader problem with the model. Let us know how it goes! 🚀

@suren1986

@suren1986 The POSE preview issue was solved and the fix will be deployed in the next release (by the looks of it, tomorrow by EOD). If you are having a similar issue for segmentation, it might be the same problem with #691. Can you maybe share an image that has this issue? Or does this issue occur for any image?

Regarding the export, does this issue occur for any new models or just for the model you shared? Can you train a new model for 2-3 epochs and check if you have the same issue? Also, maybe before training a new model, can you try again (it could be that the server faced a temporary issue)? If you still have this issue, can you open a new issue in order to properly log this issue and so that other users can easily find the discussion?

Thank you for your reply.
I have shared the image with the error in the reply #691 (comment)
This error happened with every image I previewed with the model.

@sctcorp01

@sctcorp01 hi there! Thanks for sharing the details. It seems like you're encountering a known issue with the preview functionality for certain models. We are actively working on a fix for this. In the meantime, could you please try running your model using the API directly with a sample image to see if the inference works outside of the preview environment? Here's a quick example of how to do this using Python:

import requests

# Replace the model ID in the URL and 'your_api_key_here' with your own values
url = "https://api.ultralytics.com/v1/predict/D9ttDl0mWIyPHisfp4lQ"
headers = {"x-api-key": "your_api_key_here"}

with open("path/to/your/image.jpg", "rb") as image_file:
    files = {"image": image_file}
    response = requests.post(url, headers=headers, files=files)
    print(response.json())

This might help us understand if the issue is specific to the preview or a broader problem with the model. Let us know how it goes! 🚀

Hola!
Thank you for your answer.

I tried running your Python code in Visual Studio Code with a .png image file.
But it doesn't work.

Here's my log message:
{'message': 'Unhandled server error.', 'success': False}

Thanks a lot.
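When the endpoint returns a payload like the log above, it helps to check the success flag before assuming the model or image is at fault. A small hedged helper (illustrative only, not official HUB client code):

```python
def check_inference(payload, status_code=200):
    """Return the detections list, or raise with context on a server-side failure."""
    if status_code >= 500 or not payload.get("success", False):
        # A message like 'Unhandled server error.' points at a server-side
        # fault, not a problem with the uploaded image or the model weights.
        raise RuntimeError(
            f"Server-side inference failure: {payload.get('message', 'unknown')}"
        )
    return payload.get("data", [])
```

Under this reading, the 'Unhandled server error.' reply indicates the platform bug being discussed rather than anything wrong with the .png itself.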

@sergiuwaxmann
Member

@kumarneeraj2005 @suren1986 @sctcorp01
New release 🚀 Inference and exports should work fine now.

@sergiuwaxmann sergiuwaxmann added the fixed Bug is resolved label May 22, 2024
@suren1986

@kumarneeraj2005 @suren1986 @sctcorp01 New release 🚀 Inference and exports should work fine now.

Excellent! Thank you for your hard work!

@kumarneeraj2005
Author

image
@suren1986 @sergiuwaxmann It appears that functionality broke after deployment and inference is no longer working; please see the attached picture.

@sergiuwaxmann
Member

@kumarneeraj2005 I can’t reproduce this issue, but I will investigate further. When you export your model, the weights used for inference do not change.

Are you sure you were receiving inference results previously on the image you are trying now? I ask because it might simply be that the model is unable to detect anything in the current image.

@kumarneeraj2005
Author

kumarneeraj2005 commented May 26, 2024

image Yes @sergiuwaxmann, I confirm it was functioning earlier. I have trained three pose models; previously all three were working well, but suddenly nothing is working, that is, it is not detecting on your platform. However, the same models work on my local system after export.

@sergiuwaxmann
Member

@kumarneeraj2005 Can you please share the IDs of these models?

@kumarneeraj2005
Author

image All three pose models were trained on your HUB platform.

@sergiuwaxmann
Member

@kumarneeraj2005 I understand.
The model ID is available in the URL of the model. Please share the IDs or URLs so I can identify the models.
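For anyone unsure where the ID lives: it is the last path segment of the model URL (e.g. the MFE4Iwe37kJRfqy9440W part of the links shared earlier in this thread). A trivial sketch (the helper name is mine, not an official utility):

```python
from urllib.parse import urlparse

def model_id_from_url(url):
    """Return the last non-empty path segment of a HUB model URL."""
    path = urlparse(url).path  # drops any query string such as ?tab=preview
    return [seg for seg in path.split("/") if seg][-1]

print(model_id_from_url("https://hub.ultralytics.com/models/MFE4Iwe37kJRfqy9440W"))
# MFE4Iwe37kJRfqy9440W
```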

@sergiuwaxmann
Member

@kumarneeraj2005 Thank you!

@kumarneeraj2005
Author

kumarneeraj2005 commented May 28, 2024

@kumarneeraj2005 Thank you!

Could you please let me know if it's a bug or not?

@yogendrasinghx
Member

Hi @kumarneeraj2005,

We have internally checked the issue and are unable to reproduce it on our end. We trained the models using the official COCO8-pose dataset and found that after training, the inference is working fine. Additionally, there are no issues with exports.

You can verify the model using this link: https://hub.ultralytics.com/models/Y88Hm7b757UzLUO0ZSju?tab=preview

Please let us know if there are any specific steps or configurations you are using that might help us replicate the problem.

@kumarneeraj2005
Author

@sergiuwaxmann @yogendrasinghx
image
The old issue is appearing again; it seems your system broke after the bug fixes.
image

@sergiuwaxmann
Member

@kumarneeraj2005 I will investigate this again. Thank you!

@kumarneeraj2005
Author

@sergiuwaxmann @pderrenger Could you please tell me if you have a prompt support system? If your platform isn't operating, what's the sense of paying a monthly fee? I'd like to cancel my membership; it appears that your platform is not yet ready for production.

@sergiuwaxmann
Member

sergiuwaxmann commented May 29, 2024

@kumarneeraj2005 I apologize for the inconvenience.

Based on our tests, inference works fine with the official POSE datasets and models. The inference error you are facing is hard to reproduce as we don't have access to your dataset, which is why it takes time to debug.

You can cancel your subscription at any time from the Settings page under the Billing & License tab by clicking on the Manage Subscription button.
SCR-20240529-kac
This will open the subscription portal, where you can click on the Cancel plan button.
SCR-20240529-kbb

@kumarneeraj2005
Author

Ok, I am cancelling the paid HUB subscription.

@sergiuwaxmann
Member

@kumarneeraj2005 Once again, I apologize for the inconvenience. Thank you for your time with us!


6 participants