Papers on controllable text generation (CTG) via latent auto-encoders (AEs). The focus is mainly open-domain sentence generation, with some style-transfer methods (dialogue generation is not covered for now).

## Tutorials for Latent AEs

Mostly about Variational Auto-Encoders (VAEs); the ELBO they all build toward is restated after this list.
  1. ResearchGate (2020, THU) / The Road from MLE to EM to VAE: A Brief Tutorial / TL;DR
  2. EMNLP (2018, Harvard) / A Tutorial on Deep Latent Variable Models of Natural Language / TL;DR
  3. Arxiv (2016, Carl Doersch) / Tutorial on Variational Autoencoders / The first and a very complete VAE tutorial, last updated in Jan. 2021
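
All three tutorials build up to the same objective: the evidence lower bound (ELBO) that a VAE maximizes. For quick reference, in standard notation (mine, not tied to any single tutorial above):

$$
\log p_\theta(x) \;\ge\; \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big] \;-\; \mathrm{KL}\big(q_\phi(z \mid x)\,\|\,p(z)\big)
$$

Most of the papers listed below make generation controllable by changing the prior $p(z)$, restructuring the posterior $q_\phi(z \mid x)$, or adding auxiliary losses on the latent $z$.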

## CTG via Latent AEs Survey Paper List

Paper list of CTG via latent AEs. I categorize all methodologies by their training paradigm (i.e., supervised, semi-supervised, or self-supervised).
  - Hard Control: knowledge-, keyword-, or table-driven controllable generation is denoted as K2T;
  - Soft Control: globally sentiment-, tense-, or topic-controllable generation is denoted as G2T.

Each entry follows the format:

Publication info. / Paper and link / TL;DR / Code link (if available) / Chinese blog link (if available)

## Supervised

### 2021
  1. TBD

### 2020

  1. TBD

### 2019

  1. EMNLP (Tsinghua) / Long and Diverse Text Generation with Planning-based Hierarchical Variational Model / K2T, two latent variable models: one plans the keyword assignment for each sentence and the other generates the words. / Code
  2. ICASSP (Alibaba) / Improve Diverse Text Generation by Self Labeling Conditional Variational Auto Encoder / K2T (see the conditional-VAE sketch after this list)
  3. NeurIPS (PKU) / Controllable Unsupervised Text Attribute Transfer via Editing Entangled Latent Representation / G2T, style transfer generation
  4. Arxiv (Waterloo Univ.) / Stylized Text Generation Using Wasserstein Autoencoders with a Mixture of Gaussian Prior / G2T, stylized generation with a mixture-of-Gaussians prior in a Wasserstein autoencoder
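
Several supervised entries above, the CVAE one in particular, share the conditional-VAE recipe: condition both the approximate posterior and the decoder on an attribute label and train with reconstruction plus KL. A minimal PyTorch sketch of that shared skeleton; module names, sizes, and the MSE stand-in for the token-level loss are illustrative assumptions, not taken from any paper above:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalVAE(nn.Module):
    """Minimal conditional VAE: the latent z and the decoder are conditioned on a label c."""
    def __init__(self, x_dim=256, c_dim=4, z_dim=32, h_dim=128):
        super().__init__()
        self.label_emb = nn.Embedding(c_dim, 16)
        self.encoder = nn.Sequential(nn.Linear(x_dim + 16, h_dim), nn.ReLU())
        self.to_mu = nn.Linear(h_dim, z_dim)
        self.to_logvar = nn.Linear(h_dim, z_dim)
        self.decoder = nn.Sequential(nn.Linear(z_dim + 16, h_dim), nn.ReLU(),
                                     nn.Linear(h_dim, x_dim))

    def forward(self, x, c):
        c_emb = self.label_emb(c)                                # (B, 16) label embedding
        h = self.encoder(torch.cat([x, c_emb], dim=-1))          # posterior q(z | x, c)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()     # reparameterization trick
        x_hat = self.decoder(torch.cat([z, c_emb], dim=-1))      # likelihood p(x | z, c)
        recon = F.mse_loss(x_hat, x)                             # stand-in for the token-level NLL
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return recon + kl

# x stands in for a pooled sentence representation, c for an attribute label.
model = ConditionalVAE()
loss = model(torch.randn(8, 256), torch.randint(0, 4, (8,)))
loss.backward()
```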

## Semi-Supervised

### 2022
  1. ICML (Monash) / Variational Autoencoder with Disentanglement Priors for Low-Resource Task-Specific Natural Language Generation / G2T, a BERT encoder for overall feature extraction and two separate MLP encoders for label and content encoding respectively; uses prefix-tuning with a GPT-2 decoder for zero/few-shot style-transfer generation. / N/A

### 2021

  1. Arxiv (Buffalo Univ.) / Transformer-based Conditional Variational Autoencoder for Controllable Story Generation / G2T, explores three different ways of combining the condition, with GPT-2 as both the encoder and decoder of a text VAE. / Code / Chinese Blog
  2. Arxiv (EPFL) / Bag-of-Vectors Autoencoders For Unsupervised Conditional Text Generation / G2T, style transfer task
  3. NeurIPS (UCSD) / A Causal Lens for Controllable Text Generation / G2T, the first unified causal framework for controllable text generation; introduces a structural causal model (SCM) for conditional generation and uses counterfactual and intervention tools for style transfer and controlled generation respectively. / N/A
  4. EACL (Waterloo Univ) / Polarized-VAE: Proximity Based Disentangled Representation Learning for Text Generation / G2T, style transfer task; uses two separate encoders for sentence syntax and semantics, and adds a cosine proximity loss on the latent space to push apart dissimilar sentences (those with different labels); see the proximity-loss sketch after this list. / Code
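
The Polarized-VAE entry above hinges on a proximity loss over latent codes. A hedged sketch of what such a cosine-based term can look like; this is an illustrative formulation, and the margin and equal weighting of the two terms are my assumptions, not the paper's exact loss:

```python
import torch
import torch.nn.functional as F

def proximity_loss(z, labels, margin=0.5):
    """Pull latents with the same attribute label together, push different-label latents apart."""
    zn = F.normalize(z, dim=-1)
    sim = zn @ zn.t()                                   # (B, B) pairwise cosine similarities
    same = labels.unsqueeze(1).eq(labels.unsqueeze(0)).float()
    pull = (1.0 - sim) * same                           # attract same-label pairs
    push = F.relu(sim - margin) * (1.0 - same)          # repel different-label pairs beyond the margin
    return (pull + push).mean()

z = torch.randn(16, 32, requires_grad=True)   # latent codes from the syntax or semantics encoder
labels = torch.randint(0, 3, (16,))           # attribute labels
proximity_loss(z, labels).backward()
```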

### 2020

  1. ACL (Wuhan Univ.) / Pre-train and Plug-in: Flexible Conditional Text Generation with Variational Auto-Encoders / G2T, the first "Plug-and-Play" latent AE, consisting of a pretrained VAE and $n$ plug-in VAEs for $n$ given conditions; see the plug-in VAE sketch after this list. / Code / Chinese Blog
  2. ACL (Duke) / Improving Disentangled Text Representation Learning with Information-Theoretic Guidance / G2T, motivated by information theory: two encoders produce distinct style and content latents, a discriminator with style labels trains the style latent adversarially while a VAE learns the content latent, and the two latents are concatenated for controllable generation. / N/A
  3. EMNLP (EPFL) / Plug and Play Autoencoders for Conditional Text Generation / G2T, style transfer task; proposes an "offset" network that operates in the embedding space of a frozen pretrained autoencoder, so attributes are transferred without retraining the autoencoder.
  4. ICLR (ByteDance) / Variational Template Machine For Data-to-Text Generation / K2T, uses a VAE to generate keyword templates and fills pre-assigned keywords into a sampled template. / Code
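
The "Pre-train and Plug-in" entry above trains one small plug-in VAE per condition on top of a frozen, pretrained text VAE. A minimal sketch of that idea, assuming the pretrained model exposes its latent codes; the dimensions and losses here are illustrative, not the paper's exact formulation:

```python
import torch
import torch.nn as nn

class PluginVAE(nn.Module):
    """Tiny VAE that learns a condition-specific sub-space of a frozen pretrained text VAE's latent space."""
    def __init__(self, z_dim=64, c_dim=8):
        super().__init__()
        self.enc = nn.Linear(z_dim, 2 * c_dim)   # q(w | z) over the small conditional latent w
        self.dec = nn.Linear(c_dim, z_dim)       # maps w back into the frozen VAE's z-space

    def forward(self, z):
        mu, logvar = self.enc(z).chunk(2, dim=-1)
        w = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterized conditional latent
        z_hat = self.dec(w)
        recon = (z_hat - z).pow(2).mean()                      # reconstruct the pretrained latent
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return recon + kl

# Train one PluginVAE per condition on latents z encoded from that condition's data; at generation
# time, sample w ~ N(0, I), decode it to z_hat, then run the frozen pretrained VAE's decoder on z_hat.
plugin = PluginVAE()
z = torch.randn(32, 64)              # latents from the frozen pretrained encoder
plugin(z).backward()
```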

### 2019

  1. TBD

### 2018 and older

  1. NIPS (Michigan Univ.) / Content preserving text generation with attribute controls / G2T, style transfer task
  2. ICML (CMU) / Improved Variational Autoencoders for Text Modeling using Dilated Convolutions / G2T, self-supervised and semi-supervised generation task

## Self-Supervised

### 2021
  1. Findings (Manchester Univ.) / Disentangling Generative Factors in Natural Language with Discrete Variational Autoencoders / G2T, models each condition as a discrete latent and uses Gumbel-Softmax for backpropagation; decomposes the KL regularization loss into three terms related to disentanglement learning, like the decomposition described in TC-VAE (see the Gumbel-Softmax snippet after this entry). / N/A
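
The entry above relies on the Gumbel-Softmax relaxation to backpropagate through categorical latents. A minimal example of the standard PyTorch primitive; this shows generic usage, not the paper's architecture:

```python
import torch
import torch.nn.functional as F

logits = torch.randn(8, 4, requires_grad=True)        # unnormalized scores over 4 values of one attribute
soft = F.gumbel_softmax(logits, tau=0.5)               # differentiable, approximately one-hot sample
hard = F.gumbel_softmax(logits, tau=0.5, hard=True)    # one-hot forward pass, soft (straight-through) gradients
hard.sum().backward()                                  # gradients flow back to `logits`
```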

### 2020

  1. NeurIPS (UMBC) / A Discrete Variational Recurrent Topic Model without the Reparametrization Trick / G2T, models word-level topic latent codes with a continued-multiplication approximation and adds several auxiliary losses for word-level and document-level topic correlation. / Code
  2. ICML (MIT) / Educating Text Autoencoders: Latent Representation Guidance via Denoising / G2T, adds noise at the input-token level to mitigate the token-latent irrelevance issue of text latent AEs. / Code
  3. ICML (ByteDance) / Dispersed Exponential Family Mixture VAEs for Interpretable Text Generation / G2T, mixes exponential-family distributions (ideally one distribution per topic) as the VAE prior. / Code / Chinese Blog
  4. ICML (Borealis) / On Variational Learning of Controllable Representations for Text without Supervision / G2T, first identifies the latent vacancy issue in text VAEs; uses GloVe and RNN embeddings as two distinct latents ($z_1, z_2$) and imposes orthogonality and reconstruction regularization losses on $z_1$; see the orthogonality-penalty sketch after this list. / Code / Chinese Blog
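
The Borealis entry above regularizes one latent group toward directions orthogonal to the other. A hedged sketch of such a penalty between two latent groups; this cross-correlation formulation is illustrative, not the paper's exact loss:

```python
import torch

def orthogonality_penalty(z1, z2):
    """Penalize squared cross-correlations so the two latent groups use different directions."""
    z1 = z1 - z1.mean(dim=0, keepdim=True)       # mean-center each latent group over the batch
    z2 = z2 - z2.mean(dim=0, keepdim=True)
    cross = z1.t() @ z2 / z1.size(0)             # (d1, d2) batch cross-correlation matrix
    return cross.pow(2).sum()

z1 = torch.randn(32, 16, requires_grad=True)     # e.g. the controllable (GloVe-side) latent
z2 = torch.randn(32, 48, requires_grad=True)     # e.g. the RNN-side latent
orthogonality_penalty(z1, z2).backward()
```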

### 2019

  1. EMNLP (CAS) / A Topic Augmented Text Generation Model: Joint Learning of Semantics and Structural Features / G2T, models the semantic and structural features of text with two separate VAEs and concatenates the distinct latent codes for controllable generation. / Chinese Blog
  2. NAACL (Duke) / Topic-Guided Variational Autoencoders for Text Generation / G2T, consists of a latent topic model whose latent is a Gaussian mixture (ideally one Gaussian per topic) modeled with a Householder flow, and a sequence VAE that takes the same latent for generation; see the mixture-prior sketch after this list. / Chinese Blog
  3. EMNLP (Buffalo Univ.) / Implicit Deep Latent Variable Models for Text Generation / G2T, adds an auxiliary mutual-information term between the observed data and the latent variable on top of the vanilla text VAE in order to learn a more meaningful latent space. / Code
  4. ACL (Nanjing Univ.) / Generating Sentences from Disentangled Syntactic and Semantic Spaces / G2T, generates from disentangled syntactic and semantic latent spaces.
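
The Topic-Guided VAE entry above uses a Gaussian-mixture latent in which each component plays the role of a topic. A small sketch of such a mixture prior using torch.distributions; the parameterization is illustrative, and the Householder-flow refinement the paper adds on the posterior side is omitted:

```python
import torch
from torch.distributions import Categorical, Independent, MixtureSameFamily, Normal

n_topics, z_dim = 5, 32
mix = Categorical(logits=torch.zeros(n_topics))                 # uniform weights over topics
comp = Independent(Normal(torch.randn(n_topics, z_dim),
                          torch.ones(n_topics, z_dim)), 1)      # one Gaussian per topic
prior = MixtureSameFamily(mix, comp)                            # Gaussian-mixture prior over z

z = prior.sample((8,))                  # sample latents from the mixture
print(z.shape, prior.log_prob(z).shape) # log_prob supplies the prior density for the KL/ELBO term
```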

### 2018 and older

  1. AISTATS (Duke) / Topic Compositional Neural Language Model / G2T, a VAE models the topic distribution of each document and a multi-expert LSTM network performs controllable generation. / N/A
  2. Arxiv (UCSB) / Dirichlet Variational Autoencoder for Text Modeling / G2T, a plain VAE for sequence modeling, and a Dirichlet-parameterized VAE for topic modeling whose latent posterior is conditioned on the sequence latent; see the Dirichlet-latent sketch below. / Chinese Blog
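
The Dirichlet VAE entry above places a Dirichlet over topic proportions instead of a Gaussian latent. A minimal sketch with torch.distributions; the concentration values and the flat prior are assumptions for illustration, not the paper's settings:

```python
import torch
from torch.distributions import Dirichlet, kl_divergence

concentration = torch.rand(8, 10) + 0.5        # encoder output for 10 topics (must be positive)
posterior = Dirichlet(concentration)           # q(theta | x)
prior = Dirichlet(torch.ones(10))              # flat Dirichlet prior p(theta)
theta = posterior.rsample()                    # reparameterized topic proportions; each row sums to 1
kl = kl_divergence(posterior, prior).mean()    # KL term for the ELBO
print(theta.shape, kl.item())
```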