
Explanation of results #85

Open
the-it-weirdo opened this issue Apr 12, 2024 · 11 comments

@the-it-weirdo

Hello, I trained a YOLOv8 model with 53 classes (all belonging to indoor environments) selected from the MS-COCO dataset. I trained the model for 100 epochs with default settings, and these are the results I received.

[image: training result plots — box, cls, and dfl losses plus mAP curves over the 100 epochs]

Can someone please explain the results to me?

This is what I understand:

  • The mAP50 metric is at 0.469. This means the model correctly identifies and localizes objects about 46.9% of the time when a 50% overlap with the ground-truth bounding box counts as a correct detection.
  • The mAP50-95, which averages the model's performance across IoU thresholds from 0.50 to 0.95, is at 0.331. This means the model's performance drops when a stricter localization criterion is applied. This is common, because achieving a high degree of overlap is more challenging, but it shows the model has room for improvement in the precision of its bounding box predictions.
  • In object detection, especially with a large number of classes (53 in this case), achieving high mAP values can be challenging. The mAP at IoU=0.5 is decent, suggesting the model can detect objects with a fair amount of accuracy when a lower overlap threshold is set. (These values can also be read programmatically; see the sketch at the end of this comment.)
  • The box loss is the bounding box regression loss, which measures the error in the predicted bounding boxes compared to the ground truth. Lower box loss means the predicted boxes are more accurate. The training box loss is 1.11 and the validation box loss is 1.125.
  • The classification loss (cls_loss) measures the error in the predicted class probabilities for each object compared to the ground truth. Lower classification loss means the model predicts object classes more accurately. The classification loss is 1.175 for training and 1.227 for validation.
  • The distribution focal loss (dfl_loss) comes from YOLOv8's box regression head, which predicts each box boundary as a probability distribution over discretized offsets rather than a single value; dfl_loss measures the error in those predicted distributions. A lower dfl_loss indicates more precise box boundary localization. The dfl loss is 1.179 for training and 1.166 for validation.
  • All three losses decrease over the epochs, which is a good sign that the model is learning.
  • There's a significant drop early in training (before epoch 5), followed by a plateau, which is common as the model starts to converge.
  • The validation losses follow patterns similar to the training losses, but are generally higher.

Did I miss anything in my understanding of the results? Can I improve the results? If so, how?
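
For reference, here's a minimal sketch of how I'd read these metrics programmatically after training, assuming the ultralytics Python package; the weights path and dataset YAML name are placeholders for my run:

from ultralytics import YOLO

# Load the trained weights (path is a placeholder for your run directory)
model = YOLO('runs/detect/train/weights/best.pt')

# Validate against the dataset described by the data YAML (name is hypothetical)
metrics = model.val(data='indoor-coco.yaml')

print(metrics.box.map50)  # mAP at IoU=0.50
print(metrics.box.map)    # mAP averaged over IoU=0.50:0.95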

@pderrenger
Member

@the-it-weirdo hello!

Your understanding of the results is spot-on! 🌟 You've captured the essence of what the reported metrics and loss values signify for your YOLOv8 model trained on a diverse set of indoor classes. Here's a brief additional insight and a tip for improvement:

  1. Understanding Overfitting: The validation losses being slightly higher than the training losses suggests a hint of overfitting. It's minimal but worth noting. Regular monitoring helps you identify when the model begins to fit the training data too closely at the expense of its generalization ability.

  2. Improving Results: To further improve your model, consider experimenting with data augmentation techniques to increase the diversity of your training data and help the model generalize better. Fine-tuning hyperparameters, such as the learning rate or the number of epochs, can also lead to better performance (a minimal sketch follows below). Refer to the best practices on hyperparameter tuning and advanced training techniques in the Ultralytics Docs for guidance.
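
To make that concrete, here's a minimal sketch of overriding augmentation and learning-rate settings in a training call; the values are illustrative starting points rather than tuned recommendations, and the dataset YAML name is a placeholder:

from ultralytics import YOLO

# Start from a pretrained checkpoint (or your own weights)
model = YOLO('yolov8n.pt')

# Illustrative overrides; tune these for your dataset
model.train(
    data='indoor-coco.yaml',  # placeholder dataset YAML
    epochs=150,               # train longer than the default run
    lr0=0.005,                # lower initial learning rate
    hsv_h=0.015,              # HSV hue augmentation
    fliplr=0.5,               # horizontal flip probability
    mosaic=1.0,               # mosaic augmentation
)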

Remember, model improvement is an iterative process. Small adjustments can lead to significant gains in performance. Keep exploring different strategies!

Happy modeling! 🚀

@the-it-weirdo
Author

Hello,

Thank you for your reply and suggestions, and apologies for my late response. Is there a way to add axis titles to the graphs generated during training, or a way to download the graphs?

@pderrenger
Member

@the-it-weirdo hello!

No worries about the delay! To add axis titles to the graphs generated during training or to download them, you can use TensorBoard, which integrates well with YOLOv8. TensorBoard automatically logs training metrics like losses and mAP, and you can customize the plots with titles or download them directly from the TensorBoard UI.

Here's a quick setup snippet if you're not already using it:

from torch.utils.tensorboard import SummaryWriter

# Initialize the writer (the log directory name is up to you)
writer = SummaryWriter('runs/your_experiment_name')

# During training, log metrics; loss_value and global_step
# come from your training loop
writer.add_scalar('Loss/train', loss_value, global_step)

# Flush and close the writer when training finishes
writer.close()

To view the graphs, run TensorBoard in your terminal:

tensorboard --logdir=runs

Navigate to the provided URL to view and interact with your training metrics graphs, including adding titles and downloading them.

Happy training! 🚀

@the-it-weirdo
Author

Hello @pderrenger

Thank you for the guide on Tensorboard. I appreciate it.

I have already performed a training run using Ultralytics cloud training. And the graph I posted earlier was a screenshot from the training results in the dashboard. I was wondering if there's a better way to download the graphs instead of screenshots and if I could add axis titles to them. Thank you 😊

@pderrenger
Member

Hello @the-it-weirdo,

Glad you found the guide helpful! 😊 For downloading graphs directly and adding axis titles, using TensorBoard is your best bet. If you've trained using Ultralytics cloud, you can download the TensorBoard logs from the cloud dashboard and run TensorBoard locally:

tensorboard --logdir=path_to_your_downloaded_logs

This will allow you to view, customize, and download the graphs directly from the TensorBoard interface on your local machine.

Happy analyzing! 🚀

@the-it-weirdo
Author

Thank you 😊 @pderrenger

@pderrenger
Member

You're welcome! If you have any more questions or need further assistance, feel free to ask. Happy coding! 😊

@the-it-weirdo
Author

Hello, where can I find the option to download the logs from the cloud dashboard? I used cloud training in Ultralytics HUB: I uploaded the dataset, selected a model, and trained it.

@pderrenger
Member

Hello!

To download the logs from the Ultralytics cloud dashboard, you should be able to find a "Download Logs" or similar option in the dashboard where your training results are displayed. Typically, this option is located near the summary statistics or at the end of the training job details. If you're having trouble finding it, the dashboard often includes a "Help" or "Support" section that can guide you through the process.

If you need further assistance, don't hesitate to reach out!

Happy training! 🚀

@the-it-weirdo
Author

the-it-weirdo commented Jun 4, 2024

Hello, I am unable to find a "Download Logs" or similar option in the cloud dashboard.

This is what TensorBoard shows me when I tried to use the "Share" URL:
[screenshot: TensorBoard page showing no data]

I tried making the url public and still there was no data.

@pderrenger
Member

@the-it-weirdo hello,

Thank you for reaching out and providing the screenshot. It seems there might be an issue with the TensorBoard setup or the log files might not be properly linked.

First, ensure that your training session is correctly configured to save TensorBoard logs. If you've confirmed this and still face issues, it might be beneficial to directly contact Ultralytics support for specific guidance on accessing logs from the cloud dashboard, as they can provide detailed assistance tailored to your account and training setup.

In the meantime, double-check that your training sessions are completing successfully and that the logs are not empty or corrupted. This can sometimes cause issues with TensorBoard visualization.
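
If it turns out TensorBoard logging was never enabled for the run, here's a minimal sketch of switching it on before training with the ultralytics Python package (this assumes a recent release where the "tensorboard" settings key exists):

from ultralytics import settings

# Enable the TensorBoard callback; subsequent runs will then
# write event files that TensorBoard can read
settings.update({'tensorboard': True})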

Let us know how it goes or if you need further assistance!
