
# Variational Auto-encoder with PyTorch

This is a lightweight implementation of the Variational Auto-encoder (VAE) in PyTorch, tested on the MNIST dataset.
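For orientation, here is a minimal sketch of what a VAE of this kind looks like in PyTorch. The layer sizes, module names, and loss function below are illustrative assumptions, not necessarily the exact architecture used in this repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    # Illustrative sizes: 784 pixels (28x28 MNIST), 400 hidden units, 20 latent dims.
    def __init__(self, input_dim=784, hidden_dim=400, latent_dim=20):
        super().__init__()
        # Encoder maps an image to the mean and log-variance of q(z|x)
        self.fc1 = nn.Linear(input_dim, hidden_dim)
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)
        self.fc_logvar = nn.Linear(hidden_dim, latent_dim)
        # Decoder maps a latent code back to pixel space
        self.fc2 = nn.Linear(latent_dim, hidden_dim)
        self.fc3 = nn.Linear(hidden_dim, input_dim)

    def encode(self, x):
        h = F.relu(self.fc1(x))
        return self.fc_mu(h), self.fc_logvar(h)

    def reparameterize(self, mu, logvar):
        # z = mu + sigma * eps, with eps ~ N(0, I)
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def decode(self, z):
        h = F.relu(self.fc2(z))
        return torch.sigmoid(self.fc3(h))

    def forward(self, x):
        mu, logvar = self.encode(x.view(-1, 784))
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar

def vae_loss(recon_x, x, mu, logvar):
    # Negative ELBO: reconstruction term + KL(q(z|x) || N(0, I))
    bce = F.binary_cross_entropy(recon_x, x.view(-1, 784), reduction='sum')
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld
```

The reparameterization trick (`mu + std * eps`) keeps the sampling step differentiable, which is what allows the encoder to be trained by ordinary backpropagation.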

## System Requirements

The code was tested with Python 3.7.7 on Ubuntu 18.04, using torch 1.3.1.

## Run

See Demo.ipynb for the running configuration options in detail.

*(Figure: training loss curve)*

## Experiment 1

The first experiment tests image reconstruction. A batch of 64 images is drawn from the test set, first passed through the encoder to obtain latent encodings, then passed through the decoder to check whether the VAE can recover the original images properly. A code sketch follows the figures below.

*(Figure: ground truth test images)*

*(Figure: reconstructions from the decoder)*
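A minimal sketch of such a reconstruction pass, assuming a trained VAE named `model` with the interface from the sketch above (all names here are hypothetical):

```python
import torch
from torchvision import datasets, transforms
from torchvision.utils import save_image

# Hypothetical: `model` is a trained VAE with the interface sketched above.
test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('./data', train=False, download=True,
                   transform=transforms.ToTensor()),
    batch_size=64, shuffle=True)

model.eval()
with torch.no_grad():
    x, _ = next(iter(test_loader))    # one batch of 64 test images
    recon, mu, logvar = model(x)      # encode to latent space, then decode
    save_image(x, 'ground_truth.png', nrow=8)
    save_image(recon.view(-1, 1, 28, 28), 'recover.png', nrow=8)
```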

## Experiment 2

The second experiment generates artificial images from standard Gaussian noise. The 'fake' generated images look reasonable. This reveals an important property of the VAE: distribution transformation. The VAE transforms a simple distribution (a standard Gaussian) into the very complicated distribution that exists in MNIST.

*(Figure: images generated from Gaussian noise)*
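Sampling needs only the decoder: draw latent codes from N(0, I) and decode them. A minimal sketch, again assuming the hypothetical `model` from above with a 20-dimensional latent space:

```python
import torch
from torchvision.utils import save_image

# Hypothetical: `model` is the trained VAE sketched earlier (latent_dim=20).
model.eval()
with torch.no_grad():
    z = torch.randn(64, 20)           # 64 draws from the standard Gaussian prior
    fake = model.decode(z)            # transform noise into MNIST-like digits
    save_image(fake.view(-1, 1, 28, 28), 'noise.png', nrow=8)
```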
