Classification-Then-Grounding: Reformulating Video Scene Graphs as Temporal Bipartite Graphs

PyTorch implementation of our paper "Classification-Then-Grounding: Reformulating Video Scene Graphs as Temporal Bipartite Graphs", accepted to CVPR 2022.

With a simplified version of our model, we also won 1st place in the Video Relation Understanding (VRU) Grand Challenge at ACM Multimedia 2021. (The code for object tracklet generation is available here.)

Datasets

Download the ImageNet-VidVRD dataset and the VidOR dataset, and organize them in the following directory structure:

├── dataloaders
│   ├── dataloader_vidvrd.py
│   └── ...
├── datasets
│   ├── cache                       # cache file for our dataloaders
│   ├── vidvrd-dataset
│   │   ├── train
│   │   ├── test
│   │   └── videos
│   ├── vidor-dataset
│   │   ├── annotation
│   │   └── videos
│   └── GT_json_for_eval
│       ├── VidORval_gts.json       # GT json for evaluation, generated by VidVRD-helper/prepare_gts_for_eval.py
│       └── VidVRDtest_gts.json
├── experiments   
├── models
├── ...
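
Below is a minimal sanity-check sketch (not part of the repo) for verifying that the expected folders are in place; the paths are copied from the tree above, and it assumes the script is run from the repository root.

import os

# Paths taken from the directory tree above; adjust if your checkout differs.
EXPECTED = [
    "dataloaders/dataloader_vidvrd.py",
    "datasets/vidvrd-dataset/train",
    "datasets/vidvrd-dataset/test",
    "datasets/vidvrd-dataset/videos",
    "datasets/vidor-dataset/annotation",
    "datasets/vidor-dataset/videos",
    "datasets/GT_json_for_eval",
]

missing = [p for p in EXPECTED if not os.path.exists(p)]
if missing:
    print("Missing dataset paths:", missing)
else:
    print("Dataset layout looks complete.")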

Evaluation

  1. First, generate the GT json files for evaluation (e.g., datasets/GT_json_for_eval/VidORval_gts.json and VidVRDtest_gts.json, produced by VidVRD-helper/prepare_gts_for_eval.py as noted in the tree above).
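
As a quick check that the generated GT files are readable, here is a hedged inspection sketch. The file names come from the tree above; the assumption that each file holds a dict keyed by video id is only a guess, and the authoritative format is whatever VidVRD-helper/prepare_gts_for_eval.py writes.

import json

# File names as listed in the directory tree; their internal layout is not
# documented in this README, so we only load and report basic statistics.
for path in ["datasets/GT_json_for_eval/VidVRDtest_gts.json",
             "datasets/GT_json_for_eval/VidORval_gts.json"]:
    with open(path) as f:
        gts = json.load(f)
    # Assumption: the top-level object is a dict keyed by video id.
    print(path, type(gts).__name__, len(gts), "entries")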

Training (TODO)

The code for training is still being organized (an initial version will be completed before March 28, 2022).

Data to release

  • I3D features of VidOR train & val (around 6G)
  • VidOR traj .npy files (OnlyPos) (already released, around 12G)
  • VidVRD traj .npy files (with features, around 20G)
  • cache files for train & val (for VidOR)
    • v9 for val (around 15G)
    • v7clsme for train (14 parts, around 130G in total)
  • cache files for VidVRD will not be released (they can be generated from the VidVRD traj .npy files)

Citation

If our work is helpful for your research, please cite our publication:

@inproceedings{gao2021classification,
  title={Classification-Then-Grounding: Reformulating Video Scene Graphs as Temporal Bipartite Graphs},
  author={Gao, Kaifeng and Chen, Long and Niu, Yulei and Shao, Jian and Xiao, Jun},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2022}
}

TODO

  • add code for training
  • add explanations for some terms, e.g., "proposal" and "use_pku"
  • change the term "slots" to "bins"
  • explain the EntiNameEmb, classeme, and avg_clsme
  • explain the format of TrajProposal's features, e.g., traj_classeme = traj_features[:,:,self.dim_feat:] (see the sketch after this list)
  • clean up utils_func
  • Note: all scores are truncated to 4 decimal places (not rounded)
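
For reference, here is a minimal sketch of the TrajProposal feature split mentioned in the TODO above. Only the slicing traj_features[:,:,self.dim_feat:] comes from the repo; the tensor shape, the example dimensions, and the name traj_visual are illustrative assumptions.

import torch

# Hypothetical dimensions for illustration only; the real values come from TrajProposal.
dim_feat, dim_clsme = 2048, 35          # appearance feature size, classeme size (assumed)
num_trajs, num_frames = 4, 32

traj_features = torch.randn(num_trajs, num_frames, dim_feat + dim_clsme)

# The slicing below mirrors the line quoted in the TODO:
#   traj_classeme = traj_features[:,:,self.dim_feat:]
traj_visual   = traj_features[:, :, :dim_feat]   # assumed: appearance part of the feature
traj_classeme = traj_features[:, :, dim_feat:]   # classeme (per-category score) part

print(traj_visual.shape, traj_classeme.shape)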
