From 7c80a8def30a1229c7a18bab4b7b3114832ede1e Mon Sep 17 00:00:00 2001
From: Xintao
Date: Thu, 8 Nov 2018 16:42:42 +0800
Subject: [PATCH] Create QA.md

---
 QA.md | 32 ++++++++++++++++++++++++++++++++
 1 file changed, 32 insertions(+)
 create mode 100644 QA.md

diff --git a/QA.md b/QA.md
new file mode 100644
index 0000000..fafbc3f
--- /dev/null
+++ b/QA.md
@@ -0,0 +1,32 @@

# Frequently Asked Questions

### 1. How to reproduce your results in the [PIRM18-SR Challenge](https://www.pirm2018.org/PIRM-SR.html) (with a low perceptual index)?

First, the released ESRGAN model on GitHub (`RRDB_ESRGAN_x4.pth`) is **different** from the model we submitted to the competition.
We found that a lower perceptual index does not always guarantee better visual quality.
The aims of the competition and of our ESRGAN work are slightly different: the competition targets a lower perceptual index, while our ESRGAN work targets better visual quality.

Therefore, in the PIRM18-SR Challenge, we used several tricks to obtain the best perceptual index (see Section 4.5 in the [paper](https://arxiv.org/abs/1809.00219)).

Here we provide the models and codes used in the competition, which can reproduce the results on the `PIRM test dataset` (we use MATLAB 2016b/2017a):

| Group | Perceptual index | RMSE |
| ------------- |:-------------:| -----:|
| SuperSR | 1.978 | 15.30 |

> 1. Download the model and codes from [GoogleDrive](https://drive.google.com/file/d/1l0gBRMqhVLpL_-7R7aN-q-3hnv5ADFSM/view?usp=sharing)
> 2. Put the LR input images in the `LR` folder
> 3. Run `python test.py`
> 4. Run `main_reverse_filter.m` in MATLAB as a post-processing step
> 5. The results on my computer are: Perceptual index: **1.9777** and RMSE: **15.304**


### 2. How do you get the perceptual index in your ESRGAN paper?
In our paper, we provide the perceptual index in two places.

1). In Fig.
2, the perceptual index on the PIRM self-validation dataset is obtained with the **model we submitted to the competition**, since the purpose of this figure is to show the perception-distortion plane. We also use the same post-processing here as in the competition.

2). In Fig. 7, the perceptual indices are provided as references; they are computed on the data generated by the released ESRGAN model `RRDB_ESRGAN_x4.pth` on GitHub.
Also, there is **no** post-processing when testing the released ESRGAN model, for better visual quality.
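For reference, the perceptual index used throughout is the PIRM-SR challenge metric, defined as PI = ½((10 − Ma) + NIQE), where Ma (Ma et al.'s score) is higher-is-better and NIQE is lower-is-better. A minimal Python sketch of the formula follows; the function name and the sample scores are illustrative only — the actual Ma and NIQE values come from the challenge's official MATLAB evaluation code:

```python
def perceptual_index(ma_score: float, niqe_score: float) -> float:
    """PIRM-SR perceptual index: lower is better.

    PI = 0.5 * ((10 - Ma) + NIQE)
    Ma is a higher-is-better score; NIQE is lower-is-better,
    so both terms decrease as perceptual quality improves.
    """
    return 0.5 * ((10.0 - ma_score) + niqe_score)

# Illustrative scores only (not measured values):
print(perceptual_index(8.2, 4.1))  # roughly 2.95
```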