[Feature] Add a general data structure for the results of models #5508

Merged: 38 commits into open-mmlab:master on Sep 24, 2021

Conversation

@jshilong (Collaborator) commented on Jul 2, 2021

Motivation

  1. For instance-level tasks such as Instance Segmentation and Panoptic Segmentation, the mask head usually needs some intermediate results from the detection head during training, such as assign results. This makes the return type of the detection head increasingly complex, so we need a more general data structure to keep the return type simple and uniform.
  2. As a planned feature, we will add a post-process registry and support customizing the post-process pipeline through configs (similar to customizing augmentation with train_pipeline in the config). This also needs a general data structure to hold all of the model's predictions.

Modification

I add GeneralData and InstanceData to results.py and add the corresponding unit tests.
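
For readers who have not opened results.py yet, here is a rough, simplified sketch of the idea behind GeneralData. This is an illustration only, not the actual implementation: it just shows the split between meta information and data fields, both reachable through attribute-style and dict-style access.

        # Simplified stand-in for a GeneralData-like container (illustration
        # only; the real class in results.py has more checks and features).
        class SimpleGeneralData:

            def __init__(self, meta_info=None, data=None):
                self._meta_info = dict(meta_info or {})
                self._data = dict(data or {})

            # attribute-style access, e.g. results.det_labels
            def __getattr__(self, name):
                if name in self.__dict__.get('_data', {}):
                    return self._data[name]
                if name in self.__dict__.get('_meta_info', {}):
                    return self._meta_info[name]
                raise AttributeError(name)

            def __setattr__(self, name, value):
                if name in ('_meta_info', '_data'):
                    super().__setattr__(name, value)
                else:
                    self._data[name] = value

            def __delattr__(self, name):
                if name in self._data:
                    del self._data[name]
                else:
                    super().__delattr__(name)

            # dict-style access, e.g. results['det_scores']
            def __setitem__(self, name, value):
                self._data[name] = value

            def __getitem__(self, name):
                return self._data[name]

            def __contains__(self, name):
                return name in self._data or name in self._meta_info

            def pop(self, name, default=None):
                return self._data.pop(name, default)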

BC-breaking

None

Use cases

For GeneralData

        >>> import torch
        >>> from mmdet.core import GeneralData
        >>> img_meta = dict(img_shape=(800, 1196, 3), pad_shape=(800, 1216, 3))
        >>> results = GeneralData(meta_info=img_meta)
        >>> 'img_shape' in results
        True
        >>> results.det_labels = torch.LongTensor([0, 1, 2, 3])
        >>> results["det_scores"] = torch.Tensor([0.01, 0.1, 0.2, 0.3])
        >>> print(results)
        <GeneralData(

          META INFORMATION
        img_shape: (800, 1196, 3)
        pad_shape: (800, 1216, 3)

          DATA FIELDS
        shape of det_labels: torch.Size([4])
        shape of det_scores: torch.Size([4])

        ) at 0x7f84acd10f90>
        >>> results.det_scores
        tensor([0.0100, 0.1000, 0.2000, 0.3000])
        >>> results.det_labels
        tensor([0, 1, 2, 3])
        >>> results['det_labels']
        tensor([0, 1, 2, 3])
        >>> 'det_labels' in results
        True
        >>> results.img_shape
        (800, 1196, 3)
        >>> 'det_scores' in results
        True
        >>> del results.det_scores
        >>> 'det_scores' in results
        False
        >>> det_labels = results.pop('det_labels', None)
        >>> det_labels
        tensor([0, 1, 2, 3])
        >>> 'det_labels' in results
        False
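
The behaviour shown above translates directly into a unit test. Below is a small sketch that mirrors the doctest (assuming this PR's branch of mmdet is importable); it is not the actual test file added in this PR.

        # Sketch of a unit test mirroring the GeneralData doctest above.
        import torch
        from mmdet.core import GeneralData


        def test_general_data_basic():
            img_meta = dict(img_shape=(800, 1196, 3), pad_shape=(800, 1216, 3))
            results = GeneralData(meta_info=img_meta)

            assert 'img_shape' in results

            results.det_labels = torch.LongTensor([0, 1, 2, 3])
            results['det_scores'] = torch.Tensor([0.01, 0.1, 0.2, 0.3])
            assert 'det_labels' in results
            assert results['det_labels'].tolist() == [0, 1, 2, 3]

            del results.det_scores
            assert 'det_scores' not in results

            det_labels = results.pop('det_labels', None)
            assert det_labels.tolist() == [0, 1, 2, 3]
            assert 'det_labels' not in results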

For InstanceData

        >>> from mmdet.core import InstanceData
        >>> import numpy as np
        >>> import torch
        >>> img_meta = dict(img_shape=(800, 1196, 3), pad_shape=(800, 1216, 3))
        >>> results = InstanceData(meta_info=img_meta)
        >>> 'img_shape' in results
        True
        >>> results.det_labels = torch.LongTensor([0, 1, 2, 3])
        >>> results["det_scores"] = torch.Tensor([0.01, 0.7, 0.6, 0.3])
        >>> results["det_masks"] = np.ndarray(4, 2, 2)
        >>> len(results)
        4
        >>> print(results)
        <InstanceData(

            META INFORMATION
        pad_shape: (800, 1216, 3)
        img_shape: (800, 1196, 3)

            PREDICTIONS
        shape of det_labels: torch.Size([4])
        shape of det_masks: (4, 2, 2)
        shape of det_scores: torch.Size([4])

        ) at 0x7fe26b5ca990>
        >>> sorted_results = results[results.det_scores.sort().indices]
        >>> sorted_results.det_scores
        tensor([0.0100, 0.3000, 0.6000, 0.7000])
        >>> sorted_results.det_labels
        tensor([0, 3, 2, 1])
        >>> print(results[results.det_scores > 0.5])
        <InstanceData(

            META INFORMATION
        pad_shape: (800, 1216, 3)
        img_shape: (800, 1196, 3)

            PREDICTIONS
        shape of det_labels: torch.Size([2])
        shape of det_masks: (2, 2, 2)
        shape of det_scores: torch.Size([2])

        ) at 0x7fe26b6d7790>
        >>> results[results.det_scores > 0.5].det_labels
        tensor([1, 2])
        >>> results[results.det_scores > 0.5].det_scores
        tensor([0.7000, 0.6000])
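
The sorting and filtering above work because a single index (a bool mask or a LongTensor of indices) is applied to every data field at once. Below is a simplified, illustrative sketch of that behaviour; it is not the actual InstanceData.__getitem__, which, as the example above shows, also slices numpy fields such as det_masks.

        # Illustrative sketch: apply one index to every per-instance field.
        import torch


        def select_instances(data_fields, item):
            """Apply the same index (bool mask or LongTensor) to each field."""
            selected = {}
            for name, value in data_fields.items():
                if isinstance(value, torch.Tensor):
                    selected[name] = value[item]
                else:
                    # e.g. numpy arrays like det_masks in the example above
                    selected[name] = value[item.cpu().numpy()]
            return selected


        scores = torch.Tensor([0.01, 0.7, 0.6, 0.3])
        labels = torch.LongTensor([0, 1, 2, 3])
        kept = select_instances({'det_scores': scores, 'det_labels': labels},
                                scores > 0.5)
        print(kept['det_labels'])   # tensor([1, 2])
        print(kept['det_scores'])   # tensor([0.7000, 0.6000])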

@codecov bot commented on Jul 5, 2021

Codecov Report

Merging #5508 (9859f23) into master (5ef56c1) will increase coverage by 0.26%.
The diff coverage is 91.30%.


@@            Coverage Diff             @@
##           master    #5508      +/-   ##
==========================================
+ Coverage   61.48%   61.75%   +0.26%     
==========================================
  Files         306      309       +3     
  Lines       24174    24381     +207     
  Branches     4005     4067      +62     
==========================================
+ Hits        14864    15057     +193     
- Misses       8518     8525       +7     
- Partials      792      799       +7     
Flag Coverage Δ
unittests 61.73% <91.30%> (+0.26%) ⬆️

Flags with carried forward coverage won't be shown.

Impacted Files Coverage Δ
mmdet/core/data_structures/general_data.py 89.62% <89.62%> (ø)
mmdet/core/data_structures/instance_data.py 94.11% <94.11%> (ø)
mmdet/core/__init__.py 100.00% <100.00%> (ø)
mmdet/core/data_structures/__init__.py 100.00% <100.00%> (ø)
mmdet/core/bbox/samplers/random_sampler.py 75.00% <0.00%> (-5.56%) ⬇️
mmdet/models/roi_heads/mask_heads/maskiou_head.py 89.65% <0.00%> (+2.29%) ⬆️
mmdet/utils/util_mixins.py 43.47% <0.00%> (+17.39%) ⬆️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@RangiLyu (Member) commented

All parameters in the unit tests and doc examples are named results; they may all need to be renamed. 😅

@jshilong (Collaborator, Author) commented

All parameters in the unit tests and doc examples are named results; they may all need to be renamed.

DONE

@ZwwWayne (Collaborator) commented

How about renaming the module to mmdet.core.data_structures rather than mmdet.core.general_data?

@RangiLyu (Member) left a comment

LGTM

@jshilong (Collaborator, Author) commented

How about renaming the module to mmdet.core.data_structures rather than mmdet.core.general_data?

DONE

@ZwwWayne merged commit c57e3ee into open-mmlab:master on Sep 24, 2021
@jshilong changed the title from "Add a general data structure for the results of models" to "[Feature] Add a general data structure for the results of models" on Sep 24, 2021
ZwwWayne pushed a commit to ZwwWayne/mmdetection that referenced this pull request Jul 19, 2022
* init commit for results

* add results and instance results

* add docstr

* add more unit tests

* add more unit tests

* add more unit tests

* add more unit tests

* add unit test for instance results

* add example

* add meta_info_keys results_keys

* add modified from

* fix unit tests

* fix typo

* add format_results

* add test for format_results

* add unit test for format_results

* add unit test

* support detection results in test.py

* add more detailed comments

* resolve comments

* fix rle encode

* fix results

* fix results

* rename

* revert test

* fix import in example

* fix unit tests

* add more unit tests

* add more unit tests

* add more unit tests

* rename meta to meta_info

* fix docstr

* fix doc

* fix some default value and function name

* fix doc and move InstanceData to a new file

* fix typo

* fix unit tests in torch 1.3