Important: if you are interested in producing results from a trained model, you will probably want to use the companion repository iNNfer, a GUI (for ESRGAN models, for video), or a smaller inference repo (for ESRGAN, for video).
Otherwise, if you want results that automatically include evaluation metrics (to compare against papers' results), the code in this repository can run inference on batches of images and also supports additional options (such as CEM, geometric self-ensemble, or automatic cropping of images before upscaling for VRAM-limited environments), as follows.
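The geometric self-ensemble option mentioned above can be sketched as: run the model on all 8 dihedral transforms of the input (4 rotations × optional flip), undo each transform on the output, and average. This is a minimal NumPy sketch; the function name and the nearest-neighbour stand-in "model" are hypothetical, not this repository's API.

```python
import numpy as np

def geometric_self_ensemble(model, img):
    """Average model outputs over the 8 dihedral transforms (flips/rotations).

    `model` maps an HxW array to an sH x sW array; averaging the
    de-transformed outputs typically gives a small quality boost.
    """
    outputs = []
    for k in range(4):                      # 0/90/180/270 degree rotations
        for flip in (False, True):          # optional horizontal flip
            t = np.rot90(img, k)
            if flip:
                t = np.fliplr(t)
            out = model(t)
            if flip:                        # undo the transform on the output
                out = np.fliplr(out)
            out = np.rot90(out, -k)
            outputs.append(out)
    return np.mean(outputs, axis=0)

# Stand-in "model" (hypothetical): nearest-neighbour 2x upscale
upscale2x = lambda x: np.kron(x, np.ones((2, 2)))

lr = np.arange(16, dtype=float).reshape(4, 4)
sr = geometric_self_ensemble(upscale2x, lr)
print(sr.shape)  # (8, 8)
```

The cost is 8 forward passes per image, which is why it is usually an opt-in flag.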
For SR models:
- Modify the configuration file `options/sr/test_sr.yml` (or `options/sr/test_sr.json`)
- Run command: `python test.py -opt options/sr/test_sr.yml` (or `python test.py -opt options/sr/test_sr.json`)
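The automatic cropping option mentioned in the introduction amounts to tiling: split the input into patches small enough to fit in VRAM, upscale each patch, and stitch the results. A minimal NumPy sketch with hypothetical names (real implementations usually overlap the tiles and blend the seams to avoid artifacts):

```python
import numpy as np

def upscale_in_tiles(model, img, tile=64, scale=2):
    """Upscale an HxW image tile by tile to bound peak memory use.

    Non-overlapping tiles for simplicity; production code typically
    overlaps tiles and blends the seams.
    """
    h, w = img.shape
    out = np.zeros((h * scale, w * scale), dtype=img.dtype)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            patch = img[y:y + tile, x:x + tile]
            out[y * scale:(y + patch.shape[0]) * scale,
                x * scale:(x + patch.shape[1]) * scale] = model(patch)
    return out

upscale2x = lambda p: np.kron(p, np.ones((2, 2)))  # stand-in "model"
lr = np.random.rand(100, 130)                      # not a multiple of the tile size
sr = upscale_in_tiles(upscale2x, lr, tile=64, scale=2)
print(sr.shape)  # (200, 260)
```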
For SRFlow models:
- Modify the configuration file `options/srflow/test_srflow.yml`
- Run command: `python test_srflow.py -opt options/srflow/test_srflow.yml`
For SFTGAN models:
- Obtain the segmentation probability maps: `python test_seg.py`
- Run command: `python test_sftgan.py`
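The "segmentation probability maps" from the first step are per-pixel class probabilities that the second step uses as conditioning. A hedged NumPy sketch of the general idea (the function name and class count are illustrative, not the actual `test_seg.py` interface): per-pixel logits are turned into probability maps with a softmax over the class axis.

```python
import numpy as np

def logits_to_prob_maps(logits):
    """Convert per-pixel class logits of shape (C, H, W) into probability
    maps via a numerically stable softmax over the class axis."""
    shifted = logits - logits.max(axis=0, keepdims=True)  # avoid exp overflow
    e = np.exp(shifted)
    return e / e.sum(axis=0, keepdims=True)

logits = np.random.randn(8, 32, 32)  # 8 hypothetical segmentation classes
probs = logits_to_prob_maps(logits)
print(probs.shape)                   # (8, 32, 32); each pixel's probs sum to 1
```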
For video (VSR) models:
- Modify the configuration file `options/video/test_video.yml`
- Run command: `python test_vsr.py -opt options/video/test_video.yml`
While it is possible to use the same steps as with the Super-Resolution models, it is recommended to use iNNfer for these cases.
In the case of pix2pix with the original configuration, batch normalization uses the statistics of the test batch (rather than the aggregated statistics of training, i.e., the network runs in `model.train()` mode), so it will produce slightly different inference results every time. Try it both with `model.train()` on and off (`model.eval()`) to compare the results.
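Why test-batch statistics make inference non-deterministic can be illustrated with a minimal NumPy simulation of batch norm (not this repository's code): in train mode the same sample is normalized differently depending on its batch-mates, while eval mode uses fixed running statistics and is reproducible.

```python
import numpy as np

def batchnorm(x, running_mean, running_var, use_batch_stats, eps=1e-5):
    """Normalize a batch of features (N, C). With use_batch_stats=True
    (the `model.train()` behaviour), statistics come from the current
    batch, so one sample's output depends on the other samples."""
    if use_batch_stats:
        mean, var = x.mean(axis=0), x.var(axis=0)
    else:  # `model.eval()`: fixed running statistics
        mean, var = running_mean, running_var
    return (x - mean) / np.sqrt(var + eps)

rng = np.random.default_rng(0)
sample = rng.normal(size=(1, 4))
batch_a = np.vstack([sample, rng.normal(size=(3, 4))])  # same sample,
batch_b = np.vstack([sample, rng.normal(size=(3, 4))])  # different batch-mates
rm, rv = np.zeros(4), np.ones(4)

train_a = batchnorm(batch_a, rm, rv, True)[0]
train_b = batchnorm(batch_b, rm, rv, True)[0]
eval_a = batchnorm(batch_a, rm, rv, False)[0]
eval_b = batchnorm(batch_b, rm, rv, False)[0]
print(np.allclose(train_a, train_b), np.allclose(eval_a, eval_b))  # False True
```

This is why the paragraph above suggests comparing both modes: eval mode is deterministic, but the original pix2pix results were produced with batch statistics.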