From 08a2fc7758767926c1e004ae7d4dc9fb4621ce59 Mon Sep 17 00:00:00 2001
From: Yifei Yang
Date: Wed, 13 Oct 2021 12:32:28 +0800
Subject: [PATCH] [Fix] Equation Presentation of Docs (#133)

* fix equation show

* check

Co-authored-by: yangyifei
---
 docs/quick_run.md        | 8 ++++----
 src/pytorch-sphinx-theme | 1 +
 2 files changed, 5 insertions(+), 4 deletions(-)
 create mode 160000 src/pytorch-sphinx-theme

diff --git a/docs/quick_run.md b/docs/quick_run.md
index adffb7d10..d00c411db 100644
--- a/docs/quick_run.md
+++ b/docs/quick_run.md
@@ -292,16 +292,16 @@ We also perform a survey on the influence of data loading pipeline and the versi

 ## PPL

 Perceptual path length measures the difference between consecutive images (their VGG16 embeddings) when interpolating between two random inputs. Drastic changes mean that multiple features have changed together and that they might be entangled. Thus, a smaller PPL score appears to indicate higher overall image quality by experiments. \
 As a basis for our metric, we use a perceptually-based pairwise image distance that is calculated as a weighted difference between two VGG16 embeddings, where the weights are fit so that the metric agrees with human perceptual similarity judgments.
-If we subdivide a latent space interpolation path into linear segments, we can define the total perceptual length of this segmented path as the sum of perceptual differences over each segment, and a natural definition for the perceptual path length would be the limit of this sum under infinitely fine subdivision, but in practice we approximate it using a small subdivision .
+If we subdivide a latent space interpolation path into linear segments, we can define the total perceptual length of this segmented path as the sum of perceptual differences over each segment, and a natural definition for the perceptual path length would be the limit of this sum under infinitely fine subdivision, but in practice we approximate it using a small subdivision ``$`\epsilon=10^{-4}`$``.
 The average perceptual path length in latent `space` Z, over all possible endpoints, is therefore
-
+``$$`L_Z = E[\frac{1}{\epsilon^2}d(G(slerp(z_1,z_2;t)), G(slerp(z_1,z_2;t+\epsilon)))]`$$``
 Computing the average perceptual path length in latent `space` W is carried out in a similar fashion:
-
+``$$`L_W = E[\frac{1}{\epsilon^2}d(g(lerp(f(z_1),f(z_2);t)), g(lerp(f(z_1),f(z_2);t+\epsilon)))]`$$``
-Where , and if we set `sampling` to full, if we set `sampling` to end. is the generator(i.e. for style-based networks), and evaluates the perceptual distance between the resulting images.We compute the expectation by taking 100,000 samples (set `num_images` to 50,000 in our code).
+where ``$`z_1, z_2 \sim P(z)`$``, and ``$`t \sim U(0,1)`$`` if we set `sampling` to full, ``$`t \in \{0,1\}`$`` if we set `sampling` to end. ``$`G`$`` is the generator (i.e., ``$`g \circ f`$`` for style-based networks), and ``$`d(\cdot,\cdot)`$`` evaluates the perceptual distance between the resulting images. We compute the expectation by taking 100,000 samples (set `num_images` to 50,000 in our code).

 You can find the complete implementation in `metrics.py`, which refers to https://github.com/rosinality/stylegan2-pytorch/blob/master/ppl.py.

 If you want to evaluate models with `PPL` metrics, you can add the `metrics` into your config file like this:
diff --git a/src/pytorch-sphinx-theme b/src/pytorch-sphinx-theme
new file mode 160000
index 000000000..b41e564f8
--- /dev/null
+++ b/src/pytorch-sphinx-theme
@@ -0,0 +1 @@
+Subproject commit b41e564f8c4cf85a5382f08eed2807b606b1fa0c
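For reviewers, the expectation the patch documents can be approximated by a plain Monte-Carlo loop. The sketch below is illustrative only and is not the project's `metrics.py` implementation: `generator` and `distance` are hypothetical placeholder callables standing in for the generator G and the VGG16-based perceptual distance d, and a toy dimension is used instead of a real latent size.

```python
import numpy as np

def slerp(a, b, t):
    """Spherical linear interpolation between latent vectors a and b."""
    a_n = a / np.linalg.norm(a)
    b_n = b / np.linalg.norm(b)
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))
    so = np.sin(omega)
    if so < 1e-8:  # nearly parallel vectors: fall back to linear interpolation
        return (1.0 - t) * a + t * b
    return np.sin((1.0 - t) * omega) / so * a + np.sin(t * omega) / so * b

def ppl_z(generator, distance, num_samples=100, dim=512,
          eps=1e-4, sampling="full", rng=None):
    """Monte-Carlo estimate of
    L_Z = E[ d(G(slerp(z1,z2;t)), G(slerp(z1,z2;t+eps))) / eps^2 ]."""
    if rng is None:
        rng = np.random.default_rng(0)
    total = 0.0
    for _ in range(num_samples):
        z1 = rng.standard_normal(dim)
        z2 = rng.standard_normal(dim)
        # t ~ U(0,1) for sampling="full", t in {0,1} for sampling="end"
        t = rng.uniform() if sampling == "full" else float(rng.integers(0, 2))
        img_a = generator(slerp(z1, z2, t))
        img_b = generator(slerp(z1, z2, t + eps))
        total += distance(img_a, img_b) / (eps * eps)
    return total / num_samples
```

Estimating L_W would swap `slerp` for `lerp` applied to `f(z1)`, `f(z2)` and call the synthesis network `g` instead of the full `G`, mirroring the two formulas above.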