# ODMS Dataset

ODMS is the first dataset for learning Object Depth via Motion and Segmentation. ODMS training data are configurable and extensible, with each training example consisting of a series of object segmentation masks, camera movement distances, and ground truth object depth. As a benchmark evaluation, we also provide four ODMS validation and test sets with 15,650 examples in multiple domains, including robotics and driving. In our paper, we use an ODMS-trained network to perform object depth estimation in real-time robot grasping experiments, demonstrating how ODMS is a viable tool for 3D perception from a single RGB camera.
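The underlying geometric cue is that an object's segmentation mask grows as the camera moves toward it. As a minimal sketch of this relationship (a simplified two-frame, fronto-parallel case for intuition; not the dataset generator or the paper's learned network):

```python
import numpy as np

def depth_from_mask_scale(area_initial, area_final, camera_movement):
    """Estimate object depth at the final frame from two mask areas.

    Projected linear size scales inversely with depth, so
    sqrt(A0) * z0 = sqrt(Af) * zf, with z0 = zf + d (the camera moved
    d meters closer). Solving: zf = d / (sqrt(Af / A0) - 1).
    """
    scale_change = np.sqrt(area_final / area_initial)
    return camera_movement / (scale_change - 1.0)

# Example: the mask area quadruples after moving 0.5 m toward the object,
# so its final depth is 0.5 m (it started 1.0 m away).
print(depth_from_mask_scale(1000.0, 4000.0, 0.5))  # -> 0.5
```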

(New) An object detection-based version of the ODMS benchmark is now available here!

Contact: Brent Griffin (griffb at umich dot edu)

Quick Introduction: https://youtu.be/c90Fg_whjpI


## Using ODMS

Run `./demo/demo_datagen.py` to generate random ODMS data to train your model. Example training data configurations are provided in the `./config/` folder, and there is an option to save a static dataset. [native Python, has a scipy dependency]
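As described above, each training example pairs a series of segmentation masks and camera movement distances with a ground-truth depth. Purely as an illustration of that structure (the field names, shapes, and units here are hypothetical, not the generator's actual output format):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class OdmsExample:
    """Hypothetical container mirroring the example structure described above."""
    masks: np.ndarray      # (n_frames, height, width) binary segmentation masks
    distances: np.ndarray  # (n_frames,) camera movement distances in meters
    depth: float           # ground-truth object depth at the final frame (m)

# Toy placeholder example; real data comes from ./demo/demo_datagen.py.
example = OdmsExample(
    masks=np.zeros((10, 96, 112), dtype=np.uint8),
    distances=np.linspace(0.5, 0.0, 10),
    depth=1.2,
)
```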

Run `./demo/demo_dataset_eval.py` to evaluate your model on the ODMS validation and test sets. An example evaluation for the VOS-DE baseline is included, and results are saved in the `./results/` folder. [native Python; the VOS-DE baseline has a skimage dependency]
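The benchmark table below reports error per test set; assuming a mean-absolute-percent-error metric (an inference from the paper's reporting, not stated in this README), a minimal sketch of such an evaluation is:

```python
import numpy as np

def mean_percent_error(predicted_depths, true_depths):
    """Mean absolute relative depth error, in percent (assumed metric)."""
    predicted = np.asarray(predicted_depths, dtype=float)
    truth = np.asarray(true_depths, dtype=float)
    return 100.0 * np.mean(np.abs(predicted - truth) / truth)

# Each prediction is 10% off, so the mean percent error is 10.0.
print(mean_percent_error([0.9, 2.2], [1.0, 2.0]))  # -> 10.0
```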

## Benchmark

Mean percent depth error on the four ODMS test sets (lower is better):

| Method | Robot | Driving | Normal | Perturb | All |
| --- | --- | --- | --- | --- | --- |
| DBox | 11.5 | 24.8 | 11.8 | 20.3 | 17.1 |
| ODNlr | 13.1 | 31.7 | 8.6 | 17.9 | 17.8 |
| BoxLS | 17.6 | 33.3 | 13.7 | 36.6 | 25.3 |
| VOS-DE | 32.6 | 36.0 | 7.9 | 33.6 | 27.5 |

Is your technique missing even though it is published and the code is public? Let us know and we'll add it.

## Using the ODN Method

Run `./demo/demo_odn_train.py` to train your own ODN model using ODMS.

Run `./demo/demo_odn_eval.py` after training to evaluate your ODN model.

Example training and ODN model configurations are provided in the `./config/` folder. Trained models are saved in the `./results/model/` folder. [native Python, has a Torch dependency]
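For intuition, here is a minimal sketch of an ODN-style regression network, assuming masks from all frames are stacked as input channels and the camera movement distances are concatenated into the fully connected head. All layer sizes, the input resolution, and the frame count are illustrative; the actual architectures live in the `./config/` folder.

```python
import torch
import torch.nn as nn

class TinyOdn(nn.Module):
    """Illustrative ODN-style model: mask stack + distances in, depth out."""

    def __init__(self, n_frames=10):
        super().__init__()
        # Binarized masks from all frames enter as image channels.
        self.features = nn.Sequential(
            nn.Conv2d(n_frames, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        # Camera movement distances join the pooled features before regression.
        self.head = nn.Sequential(
            nn.Linear(64 + n_frames, 64),
            nn.ReLU(inplace=True),
            nn.Linear(64, 1),
        )

    def forward(self, masks, distances):
        # masks: (B, n_frames, H, W); distances: (B, n_frames)
        x = self.features(masks).flatten(1)
        x = torch.cat([x, distances], dim=1)
        return self.head(x).squeeze(1)  # (B,) predicted depths

# Toy forward/backward pass with random binary masks and distances.
model = TinyOdn()
masks = torch.rand(4, 10, 96, 112).round()
distances = torch.rand(4, 10)
loss = nn.functional.l1_loss(model(masks, distances), torch.rand(4))
loss.backward()
```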

## Publication

Please cite our paper if you find it useful for your research.

```bibtex
@inproceedings{GrCoECCV20,
  author = {Griffin, Brent A. and Corso, Jason J.},
  booktitle = {The European Conference on Computer Vision (ECCV)},
  title = {Learning Object Depth from Camera Motion and Video Object Segmentation},
  year = {2020}
}
```

ECCV 2020 Presentation: https://youtu.be/ZD4Y4oQbdks


## Use

This code is available for non-commercial research purposes only.
