# Learning Where to Look: Self-supervised Viewpoint Selection for Active Localization using Geometrical Information
The following external dependencies are required:

| Dependency | Versions known to work |
|---|---|
| CUDA | <12.1 |
> [!IMPORTANT]
> CUDA is used both during training by `torch` and to efficiently process viewpoint visibility.
First, download this repo and `cd learning-where-to-look`. Once inside the folder, build and install `learning-where-to-look` with pip:

```shell
pip install .
```
Download the training and test data:

```shell
# training data (10 meshes)
wget ftp://anonymous:@151.100.59.119/learning_where_to_look/train_data_10_meshes.pickle
# test data (2 meshes)
wget ftp://anonymous:@151.100.59.119/learning_where_to_look/test_data_2_meshes.pickle
```
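Both splits are Python pickles. A minimal sketch of loading and inspecting one (the filename comes from the download step above; the internal record structure is not documented here, so the inspection is deliberately generic):

```python
import pickle
from pathlib import Path

# Filename from the wget step above; adjust if you saved it elsewhere.
path = Path("train_data_10_meshes.pickle")

if path.exists():
    with path.open("rb") as f:
        data = pickle.load(f)
    # The record layout is repo-specific; start by checking the container type/size.
    size = len(data) if hasattr(data, "__len__") else "unknown"
    print(type(data).__name__, size)
else:
    print(f"{path} not found; run the wget command above first")
```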
Run training with the following script (300 epochs by default):

```shell
python3 lwl/apps/training/mlp_train.py --data_path <path-to-training-data.pickle> --test_data_path <path-to-test-data.pickle> --checkpoint_path models/tmp_training
```
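The same invocation can also be scripted, e.g. to sweep over several datasets. A sketch that assembles the three flags documented above and prints the full command before launching (the pickle paths are placeholders, not files shipped with this snippet):

```python
import subprocess
import sys

# Placeholder paths -- substitute your actual pickle locations.
cmd = [
    sys.executable, "lwl/apps/training/mlp_train.py",
    "--data_path", "train_data_10_meshes.pickle",
    "--test_data_path", "test_data_2_meshes.pickle",
    "--checkpoint_path", "models/tmp_training",
]
print(" ".join(cmd))  # inspect the full command line before running
# subprocess.run(cmd, check=True)  # uncomment to actually launch training
```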
If you use any of this code, please cite our paper (accepted at ECCV 2024):

```bibtex
@article{di2024learning,
  title={Learning Where to Look: Self-supervised Viewpoint Selection for Active Localization using Geometrical Information},
  author={Di Giammarino, Luca and Sun, Boyang and Grisetti, Giorgio and Pollefeys, Marc and Blum, Hermann and Barath, Daniel},
  journal={arXiv preprint arXiv:2407.15593},
  year={2024}
}
```
The repo is currently undergoing major updates; you can track progress here:

| Feature/Component | Status |
|---|---|
| CUDA/C++ compilation | ✅ Completed |
| Unit tests | ✅ Completed |
| Pybindings | ✅ Completed |
| Training | ✅ Completed |
| Documentation | |
| Preprocessing | |
| Custom data setup | |
| Inference/plot active map | |