This is the official PyTorch implementation of the WACV 2024 paper "Rotation-Constrained Cross-View Feature Fusion for Multi-View Appearance-based Gaze Estimation".
## Installation

```shell
git clone git@github.com:ut-vision/Rot-MVGaze.git
cd Rot-MVGaze
pip install -r requirements.txt
```
## Data Preparation

Please download the normalized XGaze_224 dataset from the official website.

For the synthesized MPII-NV data, please refer to *Learning-by-Novel-View-Synthesis for Full-Face Appearance-Based 3D Gaze Estimation*, or contact us directly.
Create `configs/data_path.yaml` pointing to the two dataset roots:

```yaml
xgaze: <path to xgaze>
mpiinv: <path to mpiinv>
```
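As a sanity check, the config can be read back with PyYAML before launching an experiment. This loader is only an illustrative sketch (the function name `load_data_paths` is not part of the repository); it assumes the file contains exactly the two keys shown above:

```python
import yaml  # PyYAML; assumed to be available via requirements.txt


def load_data_paths(cfg_file="configs/data_path.yaml"):
    """Read the dataset root paths from the YAML config.

    Returns the (xgaze, mpiinv) root directories as strings.
    """
    with open(cfg_file) as f:
        paths = yaml.safe_load(f)
    # Fail early if a key is missing, before any data loading starts.
    for key in ("xgaze", "mpiinv"):
        assert key in paths, f"missing '{key}' in {cfg_file}"
    return paths["xgaze"], paths["mpiinv"]
```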
## Training

Four cross-dataset experiments are available; pass one of the following as `<exp_name>`:

- `xgaze2mpiinv_known`
- `xgaze2mpiinv_novel`
- `mpiinv2xgaze_known`
- `mpiinv2xgaze_novel`
```shell
python main.py \
    --exp_name <exp_name> \
    --mode train
```
## Testing

Download the pretrained checkpoints and run the test command below.

| Experiment | Model | Path |
|---|---|---|
| XGaze to MPII-NV (known head pose) | Rot-MV | Google Drive |
| XGaze to MPII-NV (novel head pose) | Rot-MV | Google Drive |
| MPII-NV to XGaze (known head pose) | Rot-MV | Google Drive |
| MPII-NV to XGaze (novel head pose) | Rot-MV | Google Drive |
```shell
python main.py \
    --exp_name <exp_name> \
    --mode test --ckpt_pretrained <path to the ckpt>
```
## Citation

```bibtex
@inproceedings{hisadome2024rotation,
  title={Rotation-Constrained Cross-View Feature Fusion for Multi-View Appearance-based Gaze Estimation},
  author={Hisadome, Yoichiro and Wu, Tianyi and Qin, Jiawei and Sugano, Yusuke},
  booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
  pages={5985--5994},
  year={2024}
}
```
## Contact

Jiawei Qin: jqin@iis.u-tokyo.ac.jp