
update two recent preprints
ImKeTT committed Dec 23, 2022
1 parent 4d43260 commit 5c0f48f
Showing 1 changed file with 2 additions and 0 deletions.
2 changes: 2 additions & 0 deletions README.md
@@ -77,6 +77,8 @@ List format follows:
3. **ICML (UCLA)** / [Latent Diffusion Energy-Based Model for Interpretable Text Modeling](https://arxiv.org/pdf/2206.05895.pdf) / **G2T**, runs a diffusion process in the latent space, with an energy-based model (EBM) prior for sampling and variational Bayes for latent posterior approximation (see the latent-diffusion sketch after this list). Handles labels in semi-supervision with a paradigm similar to [S-VAE](https://arxiv.org/pdf/1406.5298.pdf). / [Code](https://github.com/yuPeiyu98/LDEBM)
4. **KBS (Tsinghua)** / [PCAE: A Framework of Plug-in Conditional Auto-Encoder for Controllable Text Generation](https://www.sciencedirect.com/science/article/pii/S0950705122008942) / **G2T**, introduces a *Broadcasting Net* that repeatedly injects control signals into the latent space, yielding a concentrated and manipulable latent space in the VAE (see the broadcasting sketch after this list). Experiments cover both RNN and BART VAE models. / [Code](https://github.com/ImKeTT/pcae)
5. **Arxiv (CUHK)** / [Composable Text Controls in Latent Space with ODEs](https://arxiv.org/abs/2208.00638) / **G2T**, employs a diffusion process in the latent space of an adaptive GPT-2 VAE (similar to [AdaVAE](https://arxiv.org/abs/2205.05862)); the diffusion process transports the latent distribution from a Gaussian to a controlled one. Training needs only a few parameters and little data. / [Code](https://github.com/guangyliu/LatentOps)
6. **Arxiv (Cornell)** / [Latent Diffusion for Language Generation](https://arxiv.org/pdf/2212.09462.pdf) / **G2T**, uses a class-conditional diffusion process on the continuous space between the encoder and decoder of a pre-trained encoder-decoder LM (e.g., BART). / [Code](https://github.com/justinlovelace/latent-diffusion-for-language)
7. **Arxiv (UBC)** / [DuNST: Dual Noisy Self Training for Semi-Supervised Controllable Text Generation](https://arxiv.org/pdf/2212.08724.pdf) / **G2T**, a dual VAE with jointly trained generative and classification components; it strengthens controllable generation by producing pseudo data labels and pseudo textual instances (see the self-training sketch after this list). / Nan
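
Entries 3, 5, and 6 all train a diffusion model over the continuous latent space of a pre-trained autoencoding LM rather than over tokens. Below is a minimal sketch of that shared recipe: a class-conditional denoiser trained with the standard DDPM epsilon-prediction loss on latent vectors. The `Denoiser` architecture, dimensions, and noise schedule here are illustrative assumptions, not the implementation of any of the linked papers.

```python
import torch
import torch.nn as nn

T = 1000
betas = torch.linspace(1e-4, 0.02, T)          # assumed linear noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

class Denoiser(nn.Module):
    """Predicts the noise added to a latent z_t, conditioned on the
    timestep t and a class label c (the control signal)."""
    def __init__(self, latent_dim=64, n_classes=2, hidden=256):
        super().__init__()
        self.t_emb = nn.Embedding(T, hidden)
        self.c_emb = nn.Embedding(n_classes, hidden)
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 2 * hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, latent_dim),
        )

    def forward(self, z_t, t, c):
        h = torch.cat([z_t, self.t_emb(t), self.c_emb(c)], dim=-1)
        return self.net(h)

def diffusion_loss(model, z0, c):
    """DDPM epsilon-prediction loss on latent vectors z0 (e.g., posterior
    means from the frozen encoder of a pre-trained VAE/LM)."""
    t = torch.randint(0, T, (z0.size(0),))
    eps = torch.randn_like(z0)
    a_bar = alphas_bar[t].unsqueeze(-1)
    z_t = a_bar.sqrt() * z0 + (1 - a_bar).sqrt() * eps  # forward noising
    return ((model(z_t, t, c) - eps) ** 2).mean()

model = Denoiser()
z0 = torch.randn(8, 64)        # stand-in for encoder latents
c = torch.randint(0, 2, (8,))  # control labels
diffusion_loss(model, z0, c).backward()
```

At sampling time, one would run the reverse process from Gaussian noise conditioned on the desired label and feed the resulting latent to the decoder.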
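Entry 4's *Broadcasting Net* repeatedly adds the control signal into the latent code. One minimal reading of that idea, assuming a simple stack of fusion layers (a sketch only, not PCAE's actual code, which is linked above):

```python
import torch
import torch.nn as nn

class BroadcastNet(nn.Module):
    """Re-injects ("broadcasts") the label embedding into the latent
    vector at every fusion layer, so control pervades the latent space."""
    def __init__(self, latent_dim=64, n_labels=2, n_layers=3):
        super().__init__()
        self.label_emb = nn.Embedding(n_labels, latent_dim)
        self.layers = nn.ModuleList(
            nn.Sequential(nn.Linear(latent_dim, latent_dim), nn.Tanh())
            for _ in range(n_layers))

    def forward(self, z, y):
        c = self.label_emb(y)
        for layer in self.layers:
            z = layer(z + c)  # control signal added again at each layer
        return z

z = torch.randn(8, 64)
y = torch.randint(0, 2, (8,))
z_ctrl = BroadcastNet()(z, y)  # label-aware latent fed to the VAE decoder
```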
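Entry 7's dual self-training amounts to a loop in which each component manufactures training data for the other: the classifier pseudo-labels unlabeled text for the generator, and the generator synthesizes pseudo text for the classifier. A schematic sketch with hypothetical `StubGenerator`/`StubClassifier` stand-ins (not the DuNST model itself, which couples both components in one dual VAE):

```python
import random

class StubGenerator:
    """Hypothetical stand-in for a label-conditioned text generator."""
    def fit(self, pairs): pass
    def sample(self, label): return f"pseudo text for label {label}"

class StubClassifier:
    """Hypothetical stand-in for a text classifier."""
    def fit(self, pairs): pass
    def predict(self, text): return random.randint(0, 1)

def dual_self_training(gen, clf, labeled, unlabeled, rounds=3):
    for _ in range(rounds):
        # classifier pseudo-labels unlabeled text (the "noisy" labels)
        pseudo_pairs = [(x, clf.predict(x)) for x in unlabeled]
        gen.fit(labeled + pseudo_pairs)
        # generator synthesizes pseudo text for each gold label
        pseudo_text = [(gen.sample(y), y) for _, y in labeled]
        clf.fit(labeled + pseudo_text)
    return gen, clf

labeled = [("great movie", 1), ("boring plot", 0)]
unlabeled = ["a film I watched yesterday"]
dual_self_training(StubGenerator(), StubClassifier(), labeled, unlabeled)
```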

### 2021

