# A Unified 3D Human Motion Synthesis Model via Conditional Variational Auto-Encoder

Implementation of the ICCV 2021 paper *A Unified 3D Human Motion Synthesis Model via Conditional Variational Auto-Encoder*.
Given a masked pose sequence, the proposed model is able to generate plausible results.
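To make the input format concrete, here is a toy illustration (not the repo's actual data pipeline) of what a masked pose sequence looks like: some frames of a `(frames, joints, 3)` array are hidden, and the model's job is to fill them in. All shapes and the masking pattern below are arbitrary choices for illustration.

```python
import numpy as np

# Toy pose sequence: T frames, J joints, 3D coordinates.
# T and J here are arbitrary illustration values, not the repo's settings.
T, J = 8, 17
seq = np.ones((T, J, 3))

# Boolean frame mask: True = visible, False = hidden from the model.
mask = np.ones(T, dtype=bool)
mask[3:6] = False  # frames 3-5 are masked out

# Zero out the masked frames to form the model input.
masked_seq = seq * mask[:, None, None]
print(masked_seq[4].sum())  # masked frame sums to zero
```

The model then synthesizes plausible poses for the zeroed frames, conditioned on the visible ones.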
This code was tested with PyTorch 1.6.0, CUDA 10.1, Python 3.7, and Ubuntu 16.04.
- Clone this repo:
```shell
git clone https://github.com/vanoracai/A-Unified-3D-Human-Motion-Synthesis-Model-via-Conditional-Variational-Auto-Encoder.git
cd unified_pose
```
- Download the human3.6m dataset (link) and save the file in the `./data` folder.
- Train a model on the Human3.6M dataset (see `run.sh` for more details). Example: training on hm36 without action labels:

```shell
python train_pose.py --config ./config/hm36/non_action_hm36.yaml
```
- Set `--mask_type` and `--mask_weights` in `options/base_options.py` for different training masks.
- Trained models will be saved under the `saved_files/checkpoints` folder.
- Images and videos from training & testing will be found under the `saved_files/saved_imgs/` and `saved_files/saved_videos/` folders.
- More options can be found in the `options` folder.
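Since the mask flags above live in `options/base_options.py`, they presumably follow the usual argparse-based options pattern. A minimal sketch of how such flags might be declared (the defaults, types, and help strings here are assumptions, not the repo's actual values):

```python
import argparse

# Hypothetical sketch of the mask-related flags mentioned in the README;
# only the flag names come from the repo text, everything else is assumed.
parser = argparse.ArgumentParser()
parser.add_argument('--mask_type', type=str, default='random',
                    help='which masking pattern to apply to the input pose sequence')
parser.add_argument('--mask_weights', type=float, nargs='+', default=[1.0],
                    help='loss weights associated with the masked frames')

# Example: override both flags from the command line.
opts = parser.parse_args(['--mask_type', 'random', '--mask_weights', '0.5', '1.0'])
print(opts.mask_type, opts.mask_weights)
```

Consult `options/base_options.py` for the actual flag definitions and accepted values.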
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
This software is for educational and academic research purposes only. If you wish to obtain a commercial royalty-bearing license to this software, please contact us at yujun001@e.ntu.edu.sg.
If you use this code for your research, please cite our paper.
@inproceedings{cai2021unified,
title={A unified 3d human motion synthesis model via conditional variational auto-encoder},
author={Cai, Yujun and Wang, Yiwei and Zhu, Yiheng and Cham, Tat-Jen and Cai, Jianfei and Yuan, Junsong and Liu, Jun and Zheng, Chuanxia and Yan, Sijie and Ding, Henghui and others},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
pages={11645--11655},
year={2021}
}