🚀 Feature
It's time to streamline finetuning in YOLOv5. It's become evident that finetuning and training from scratch likely require different sets of hyperparameters (hyps) to each perform their best. I see two action items for this:
Move the hardcoded hyps out of train.py into a separate coco.hyp.yaml. This sets the stage for defining a second coco.finetune.hyp.yaml, with new hyps found via hyp evolution on a finetune scenario (probably VOC for 50 epochs).
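To make the first action item concrete, a standalone coco.hyp.yaml could look like the sketch below. The keys mirror hyps of the kind currently hardcoded in train.py; the values shown are purely illustrative placeholders, not the tuned defaults:

```yaml
# coco.hyp.yaml -- illustrative sketch only; keys and values are examples,
# not the actual tuned hyps from train.py
lr0: 0.01            # initial learning rate
momentum: 0.937      # SGD momentum
weight_decay: 0.0005 # optimizer weight decay
```

A parallel coco.finetune.hyp.yaml would then carry the same keys with values produced by hyp evolution on the finetune scenario.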
Update the train configuration to allow a single-file start, versus today's dual-file requirement (i.e. a paired yolov5s.yaml and yolov5s.pt). This change is now possible because the v2.0 format embeds a model's original yaml as a class property:
`yolov5/models/yolo.py`, lines 64 to 67 in d7cfbc4:

```python
self.yaml = yaml.load(f, Loader=yaml.FullLoader)  # model dict
```
This should allow us to better define default training settings depending on whether finetune mode is used or not. Finetune mode would automatically be set to True if a *.pt model argument is supplied to train.py instead of a *.yaml argument. The drawback is that all the tutorials will need updating :(
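The dispatch described above can be sketched as follows. Note this is a hypothetical helper, not code from YOLOv5: the function name, return shape, and hyp filenames (taken from the proposal above) are all assumptions about how the suffix check might work.

```python
from pathlib import Path


def resolve_train_mode(weights_or_cfg: str):
    """Hypothetical helper: decide finetune vs. scratch training from the
    suffix of the model argument passed to train.py."""
    suffix = Path(weights_or_cfg).suffix
    if suffix == ".pt":
        # Pretrained checkpoint supplied -> finetune mode, finetune hyps
        return True, "coco.finetune.hyp.yaml"
    if suffix == ".yaml":
        # Architecture config supplied -> train from scratch, default hyps
        return False, "coco.hyp.yaml"
    raise ValueError(f"expected a *.pt or *.yaml argument, got {weights_or_cfg!r}")


print(resolve_train_mode("yolov5s.pt"))    # finetune path
print(resolve_train_mode("yolov5s.yaml"))  # scratch path
```

Because the embedded yaml travels inside the checkpoint, the *.pt branch needs no companion config file, which is what enables the single-file start.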