image-segmentation pipeline: re-enable `small_model_pt` test. #19716
Conversation
Re-enable `small_model_pt`. Enabling the current test with the current values. Debugging the values on the CI. More logs? Printing doesn't work? Using the CI values instead. Seems to be a Pillow sensitivity.
I re-enabled this test, as it should work. The error @alaradirik was seeing doesn't seem to happen on main, and the results are actually deterministic. However, the old tests used to be … I kept the old values since I think something is currently wrong, but I wanted to re-enable the test before doing any modification to the actual pipeline code. @ydshieh I pinged you to see if you had better ideas to make tests less sensitive to the environment. We could use … (one possible approach is sketched below).
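For illustration, one way to reduce that kind of sensitivity (a hypothetical sketch, not what this PR implements; `outputs` and `expected` are placeholders standing in for real pipeline results and pinned values):

```python
# Hypothetical sketch: round floats recursively before comparison so that
# tiny numerical drift (e.g. from a Pillow or torch upgrade) doesn't flip
# an exact-equality assertion.
def simplify(obj, decimals=3):
    if isinstance(obj, float):
        return round(obj, decimals)
    if isinstance(obj, dict):
        return {k: simplify(v, decimals) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return [simplify(v, decimals) for v in obj]
    return obj

# Placeholder values, not captured from a real run.
outputs = [{"label": "LABEL_0", "score": 0.9876543}]
expected = [{"label": "LABEL_0", "score": 0.988}]
assert simplify(outputs) == simplify(expected)
```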
I will check this pipeline to get a better idea of it, and see if I have any ideas 👀 👀
Thank you, I'm looking into the post_process methods of DETR to pinpoint the issue.
    # Module-level imports needed by this snippet in the real test file:
    # from transformers import AutoFeatureExtractor, AutoModelForImageSegmentation, ImageSegmentationPipeline
    def test_small_model_pt(self):
        model_id = "hf-internal-testing/tiny-detr-mobilenetsv3-panoptic"

        model = AutoModelForImageSegmentation.from_pretrained(model_id)
        feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)
        image_segmenter = ImageSegmentationPipeline(model=model, feature_extractor=feature_extractor)
Just to check my understanding: we're testing two calls to the image_segmenter pipeline, one with one input image and the other with two. Previously the "panoptic" task was tested and then "semantic", whereas now both calls are for "semantic". Is the reason for this to make sure we're only testing one thing, i.e. that the pipeline can take batched and non-batched inputs?
Batching is actually taken care of somewhere else too, for random models.
A list input is not necessarily batched, but it should return a list of what would have been inferred individually:
pipe([a, b]) == [pipe(a), pipe(b)]
in some sense.
Unfortunately, not all pipelines respect that (mostly the old ones; we never modified their behavior, to keep things compatible).
The second test with the list probably came from somewhere it was tested against and was just updated, I think. A toy illustration of the contract is sketched below.
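Toy sketch of that contract (a hypothetical `pipe` function, not the real pipeline class):

```python
# A list input returns one result per item, identical to calling the
# "pipeline" on each item individually.
def pipe(inputs):
    if isinstance(inputs, list):
        return [pipe(x) for x in inputs]  # may or may not batch internally
    return {"label": "cat", "score": 0.99}  # stand-in for a real prediction

a, b = "image_a.png", "image_b.png"
assert pipe([a, b]) == [pipe(a), pipe(b)]
```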
The main property of `small_model_pt` is that we are checking real values that should stay fixed for all eternity, meaning we should never deviate from them, because that would be a breaking change.
It can happen that updates in dependencies like `torch` or `Pillow` trigger changes in the output values, and that is fine (since we are not responsible for the change).
Any other kind of change breaks backward compatibility and should be done with caution and be documented as to why we're doing it. I think the current way of thinking is that the older something in the library is, the less acceptable it is to break anything with regard to it. If it was just added, then it's OK.
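As a hedged sketch of what such a pinned-value test looks like (the model id and fixture path come from this PR's context; the expected literals shown in comments are placeholders, not real captured values):

```python
# Sketch of a `small_model_pt`-style test: outputs are compared against
# hard-coded values captured once from a known-good run, so any deviation
# is surfaced as a (potentially breaking) change.
import unittest
from transformers import pipeline

class ImageSegmentationPipelineTests(unittest.TestCase):
    def test_small_model_pt(self):
        segmenter = pipeline(
            "image-segmentation",
            model="hf-internal-testing/tiny-detr-mobilenetsv3-panoptic",
        )
        outputs = segmenter("./tests/fixtures/tests_samples/COCO/000000039769.png")
        # In the real test the expected values are pinned literals, e.g.:
        # expected = [{"score": 0.004, "label": "LABEL_0", "mask": ...}, ...]
        # self.assertEqual(nested_simplify(outputs, decimals=4), expected)
        self.assertIsInstance(outputs, list)  # placeholder assertion
```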
previously the "panoptic" task was tested and then "semantic"
No previously only the panoptic
was tested (it was defaulted to) for both.
But I'm switching here because panoptic
appears to be broken, as it does not output anything (instead of outputting either 2 masks like before, or something resembling what semantic
outputs).
Since I want to test against values that make sense I'm using semantic
which seem OK.
When we find what's wrong with panoptic
(or why the assumption that panoptic
should return at least as much information as semantic
is wrong) we can add them as tests again.
And then this test would start checking both values, and that panoptic
> semantic
in expressiveness (which afaik should always hold)
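For reference, a sketch of checking that property directly (assuming the pipeline's `subtask` argument selects the mode, and reusing the tiny model and COCO fixture from this PR):

```python
from transformers import pipeline

image_segmenter = pipeline(
    "image-segmentation",
    model="hf-internal-testing/tiny-detr-mobilenetsv3-panoptic",
)
image = "./tests/fixtures/tests_samples/COCO/000000039769.png"

semantic_outputs = image_segmenter(image, subtask="semantic")
panoptic_outputs = image_segmenter(image, subtask="panoptic")

# The expectation discussed above: panoptic should be at least as
# expressive as semantic. Per this thread, panoptic currently returns
# nothing for this tiny model, so this check would fail today.
assert len(panoptic_outputs) >= len(semantic_outputs)
```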
Thank you for taking the time to write out such detailed explanations ❤️
LGTM! Can you make sure the title is more explicit (in particular contains image segmentation pipeline ;-) ) before merging?
LGTM! Thanks for adding the test back in and for the detailed explanation of the pipeline tests.