
[Feature] Support KLD metric and support evaluation for probabilistic models #108

Merged · 6 commits merged into open-mmlab:master on Sep 15, 2021

Conversation

LeoXing1996 (Collaborator)

Metric Design

Different from GAN metrics, probabilistic metrics are:

  1. calculated from reconstruction tasks, and
  2. calculated from a group of probabilistic parameters rather than from fake and real images

Therefore, we design a list called probabilistic_metric_name in single_gpu_online_evaluation to collect the probabilistic metrics, and we evaluate those metrics separately; a sketch of this split is shown below.
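
A minimal, hypothetical sketch of how the split might look inside the evaluation loop. Only the names `probabilistic_metric_name` and `single_gpu_online_evaluation` come from this PR; the metric attributes and loop bodies below are simplified for illustration and do not mirror the actual mmgen code.

```python
# Hypothetical sketch of the split, not the actual mmgen implementation.
probabilistic_metric_name = ['kld']


def single_gpu_online_evaluation(model, data_loader, metrics):
    prob_metrics = [m for m in metrics
                    if m.name in probabilistic_metric_name]
    image_metrics = [m for m in metrics
                     if m.name not in probabilistic_metric_name]

    # Probabilistic metrics are fed a dict of distribution parameters
    # produced by a reconstruction forward pass on the given real data.
    for data in data_loader:
        prob_dict = model(data, mode='reconstruction')
        for metric in prob_metrics:
            metric.feed(prob_dict, 'reals')

    # Image-based metrics (FID, IS, ...) keep the usual pipeline that
    # compares fake images against real images.
    for metric in image_metrics:
        ...
```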

When evaluating those probabilistic metrics, we set the forward mode to reconstruction.
We assume that all probabilistic models support this mode. Unlike forward_test, which starts from random noise, mode=reconstruction performs a reconstruction on the given data and returns a dict containing the desired probabilistic parameters.
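
As a rough illustration of this mode, assuming a VAE-style model with Gaussian parameters named `mean` and `logvar` (the exact dict keys in mmgen may differ):

```python
import torch
import torch.nn as nn


class ToyVAE(nn.Module):
    """Minimal VAE-like model illustrating a `reconstruction` forward mode."""

    def __init__(self, in_dim=784, latent_dim=16):
        super().__init__()
        self.encoder = nn.Linear(in_dim, latent_dim * 2)
        self.decoder = nn.Linear(latent_dim, in_dim)

    def forward(self, data, mode='reconstruction'):
        if mode == 'reconstruction':
            # Encode the given real data instead of starting from random
            # noise, and return the probabilistic parameters as a dict.
            mean, logvar = self.encoder(data).chunk(2, dim=-1)
            z = mean + torch.randn_like(mean) * (0.5 * logvar).exp()
            recon = self.decoder(z)
            return dict(real=data, recon=recon, mean=mean, logvar=logvar)
        raise ValueError(f'Unsupported forward mode: {mode}')
```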

We also slightly modify the batch truncation operation so that Metric.feed supports dict input, as sketched below.
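
For example, a toy KLD metric could consume such a dict directly in `feed`, truncating every tensor in the dict to the remaining image budget. This is only a sketch of the dict-input idea under assumed key names (`mean`, `logvar`), not the actual mmgen `Metric` API.

```python
class ToyKLDMetric:
    """Toy KLD metric whose feed accepts a dict of probabilistic parameters."""

    name = 'kld'

    def __init__(self, num_images):
        self.num_images = num_images
        self.kld_sum = 0.0
        self.num_fed = 0

    def feed(self, batch, mode='reals'):
        # Dict input: truncate every tensor in the dict to the remaining
        # image budget, mirroring the modified batch-truncation behaviour.
        budget = self.num_images - self.num_fed
        if budget <= 0:
            return
        batch = {k: v[:budget] for k, v in batch.items()}
        mean, logvar = batch['mean'], batch['logvar']
        # KL(N(mean, exp(logvar)) || N(0, I)), summed over latent dims.
        kld = 0.5 * (mean.pow(2) + logvar.exp() - logvar - 1).sum(dim=-1)
        self.kld_sum += kld.sum().item()
        self.num_fed += mean.shape[0]

    def summary(self):
        return self.kld_sum / max(self.num_fed, 1)
```

Feeding it the dict returned by mode=reconstruction would accumulate the per-sample Gaussian KL terms until num_images samples have been consumed.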

Further design for mode=reconstruction

Although we have not implemented any code yet, we find that a dedicated function for the reconstruction operation is critical for probabilistic models (e.g., DDPM).
In train_step, reconstruction, loss calculation, and the parameter update are performed together.
For forward_test, the interface is fixed and is called by sample_from_noise to perform a random generation process.
Therefore, we need a function that implements a separate reconstruction process and returns all the intermediate probabilistic parameters; this function can also be called by train_step (see the sketch below).
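
A possible shape for such a helper, shared between training and evaluation, might look like the following. The method name `reconstruct`, the toy network, and the MSE loss are placeholders for illustration only, not the eventual mmgen design.

```python
import torch.nn as nn
import torch.nn.functional as F


class ToyProbabilisticModel(nn.Module):
    """Sketch of factoring reconstruction out of train_step."""

    def __init__(self, dim=32):
        super().__init__()
        self.net = nn.Linear(dim, dim)

    def reconstruct(self, data):
        # Stand-alone reconstruction process that returns all intermediate
        # probabilistic parameters, so evaluation can call it directly.
        recon = self.net(data)
        return dict(real=data, recon=recon)

    def train_step(self, data, optimizer):
        # train_step reuses the same reconstruction function and only adds
        # loss calculation and the parameter update on top of it.
        outputs = self.reconstruct(data)
        loss = F.mse_loss(outputs['recon'], outputs['real'])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return dict(loss=loss.item(), **outputs)

    def forward_test(self, noise):
        # Fixed interface called by sample_from_noise for random generation.
        return self.net(noise)
```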

codecov-commenter commented Sep 7, 2021

Codecov Report

Merging #108 (1146a8f) into master (f8d6f2e) will decrease coverage by 0.78%.
The diff coverage is 77.25%.

❗ Current head 1146a8f differs from pull request most recent head 736a71b. Consider uploading reports for the commit 736a71b to get more accurate results

@@            Coverage Diff             @@
##           master     #108      +/-   ##
==========================================
- Coverage   76.06%   75.28%   -0.79%     
==========================================
  Files         118      121       +3     
  Lines        8089     8188      +99     
  Branches     1519     1561      +42     
==========================================
+ Hits         6153     6164      +11     
- Misses       1546     1609      +63     
- Partials      390      415      +25     
Flag Coverage Δ
unittests 75.28% <77.25%> (-0.79%) ⬇️

Flags with carried forward coverage won't be shown.

Impacted Files Coverage Δ
mmgen/core/evaluation/evaluation.py 7.30% <0.00%> (-0.98%) ⬇️
...rchitectures/sngan_proj/generator_discriminator.py 88.10% <ø> (ø)
mmgen/models/gans/__init__.py 100.00% <ø> (ø)
mmgen/models/gans/base_gan.py 68.13% <ø> (ø)
mmgen/models/losses/utils.py 83.33% <0.00%> (-5.96%) ⬇️
mmgen/utils/collect_env.py 20.00% <0.00%> (-1.16%) ⬇️
mmgen/datasets/unpaired_image_dataset.py 83.33% <72.72%> (-1.29%) ⬇️
...odels/translation_models/static_translation_gan.py 74.57% <74.57%> (ø)
mmgen/core/evaluation/metrics.py 75.36% <76.92%> (-0.04%) ⬇️
mmgen/apis/inference.py 47.31% <81.81%> (-1.11%) ⬇️
... and 18 more

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update f8d6f2e...736a71b.

@nbei requested a review from plyfager on September 9, 2021, 05:42.
Review comments (outdated, resolved) on mmgen/core/evaluation/evaluation.py and mmgen/core/evaluation/metrics.py.
@nbei merged commit 0032271 into open-mmlab:master on Sep 15, 2021.
LeoXing1996 added a commit that referenced this pull request Jul 16, 2022
… models (#108)

* support KLD metric

* add init under tests

* solve import error

* add manually type convert for pt<1.8

* remove an invalid element from evaluation. init

* fix by comment