
OpenPCDet fork

OpenPCDet is a clear, simple, self-contained open source project for LiDAR-based 3D object detection.

It is also the official code release of [PointRCNN], [Part-A^2 net] and [PV-RCNN].

This fork provides the exact version used in the Master's Final Project by Javier del Egido. The project studies state-of-the-art Detection and Multi-Object Tracking (DAMOT) proposals in order to design a functional pipeline to be embedded on the Nvidia Jetson AGX Xavier mounted on the Techs4AgeCar vehicle developed by the Robesafe research group. The fork provides ROS communications from the PointCloud2 input to the custom AB3DMOT tracking module, forming a functional real-time DAMOT pipeline.

The ROS messages provided by the BEV_tracking ROS package are required.

Overview

Introduction

What does OpenPCDet toolbox do?

Note that we have upgraded PCDet from v0.1 to v0.2 with new structures to support various datasets and models.

OpenPCDet is a general PyTorch-based codebase for 3D object detection from point clouds. It currently supports multiple state-of-the-art 3D object detection methods, with highly refactored code for both one-stage and two-stage 3D detection frameworks.

Based on the OpenPCDet toolbox, we won the Waymo Open Dataset challenge in the 3D Detection, 3D Tracking, and Domain Adaptation tracks among all LiDAR-only methods, and the Waymo-related models will be released in OpenPCDet soon.

We are actively updating this repo, and more datasets and models will be supported soon. Contributions are also welcome.

Model Zoo

KITTI 3D Object Detection Baselines

Selected supported methods are shown in the table below. The results are the 3D detection performance at moderate difficulty on the val set of the KITTI dataset.

  • All models are trained with 8 GTX 1080Ti GPUs and are available for download.
  • The training time is measured with 8 TITAN XP GPUs and PyTorch 1.5.
Method           training time  Car    Pedestrian  Cyclist  download
PointPillar      ~1.2 hours     77.28  52.29       62.68    model-18M
SECOND           ~1.7 hours     78.62  52.98       67.15    model-20M
PointRCNN        ~3 hours       78.70  54.41       72.11    model-16M
PointRCNN-IoU    ~3 hours       78.75  58.32       71.34    model-16M
Part-A^2-Free    ~3.8 hours     78.72  65.99       74.29    model-226M
Part-A^2-Anchor  ~4.3 hours     79.40  60.05       69.90    model-244M
PV-RCNN          ~5 hours       83.61  57.90       70.47    model-50M
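
These checkpoints can be evaluated with the toolbox's test script. The following is a sketch of the usual invocation, run from the tools/ directory; the checkpoint filename is illustrative and should match the file you actually downloaded:

$ python test.py --cfg_file cfgs/kitti_models/pv_rcnn.yaml --batch_size 4 --ckpt pv_rcnn.pth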

NuScenes 3D Object Detection Baselines

All models are trained with 8 GTX 1080Ti GPUs and are available for download.

Method                   mATE   mASE   mAOE   mAVE   mAAE   mAP    NDS    download
PointPillar-MultiHead    33.87  26.00  32.07  28.74  20.15  44.63  58.23  model-23M
SECOND-MultiHead (CBGS)  31.15  25.51  26.64  26.26  20.46  50.59  62.29  model-35M

Installation

Please refer to INSTALL.md for the installation of OpenPCDet.

Quick Demo

Please refer to DEMO.md for a quick demo to test with a pretrained model and visualize the predicted results on your custom data or the original KITTI data.

Getting Started

Please refer to GETTING_STARTED.md to learn more usage about this project.
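
For reference, single-GPU training on KITTI typically follows the pattern below; the config file is illustrative, and dataset preparation is described in GETTING_STARTED.md:

$ cd tools
$ python train.py --cfg_file cfgs/kitti_models/pointpillar.yaml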

To run inference directly on a ROS point cloud, run:

$ python3.6 inference.py

You may need to adjust the input point cloud topic name at the end of the code. Some parameters can also be tuned (a sketch of how they are used follows this list):

  • Movelidartocenter (in meters): shifts the point cloud along the X-axis so that inference covers 360º. By default, the detection grid only covers the point cloud in front of the vehicle.
  • Threshold (float between 0 and 1): minimum score an object must reach to be published as detected.
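
The snippet below is a minimal sketch of the kind of ROS wiring inference.py sets up, not the actual script: run_detector and publish_detections are stand-ins for the OpenPCDet model call and the BEV_tracking publisher, and the /velodyne_points topic name and parameter values are assumptions.

import numpy as np
import rospy
from sensor_msgs.msg import PointCloud2
from sensor_msgs import point_cloud2

MOVE_LIDAR_TO_CENTER = 20.0   # metres to shift the cloud along X (illustrative value)
SCORE_THRESHOLD = 0.5         # minimum score for an object to be published

def run_detector(points):
    # Placeholder for the OpenPCDet model call performed in inference.py.
    # Returns (boxes, scores, labels); swap in the real detector here.
    return np.zeros((0, 7)), np.zeros(0), np.zeros(0, dtype=int)

def publish_detections(boxes, labels):
    # Placeholder: inference.py publishes detections as BEV_tracking messages.
    rospy.loginfo("publishing %d detections", len(boxes))

def cloud_callback(msg):
    # Convert the PointCloud2 message into an (N, 4) array of x, y, z, intensity.
    points = np.array(list(point_cloud2.read_points(
        msg, field_names=("x", "y", "z", "intensity"), skip_nans=True)))
    # Shift along X so the detection grid covers the whole 360-degree sweep.
    points[:, 0] += MOVE_LIDAR_TO_CENTER
    boxes, scores, labels = run_detector(points)
    keep = scores > SCORE_THRESHOLD
    publish_detections(boxes[keep], labels[keep])

if __name__ == "__main__":
    rospy.init_node("openpcdet_inference")
    rospy.Subscriber("/velodyne_points", PointCloud2, cloud_callback, queue_size=1)
    rospy.spin()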

Docker (September 2020 Update)

If you prefer to install OpenPCDet as a Docker image, you can download the provided image from link.

Load the Docker image with: docker load --input OpenPCDet.tar
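
After loading, a container can be started along the following lines. The image name is an assumption; check the repository/tag printed by docker load (or listed by docker images), and note that GPU access requires the NVIDIA container toolkit:

$ docker images
$ docker run --gpus all --network host -it openpcdet:latest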

DAMOT Docker for Nvidia Jetson AGX Xavier (September 2020 Update)

A full Detection and Multi-Object Tracking image (combined with the forked AB3DMOT), built for the ARM-based Nvidia Jetson AGX Xavier, can be downloaded from link.

It provides a complete pipeline that takes a ROS PointCloud2 topic as input and produces tracked objects as output, as in AB3DMOT, using the ROS bev_tracking package, which is also installed in the image.

License

OpenPCDet is released under the Apache 2.0 license.

Acknowledgement

OpenPCDet is an open source project for LiDAR-based 3D scene perception that supports multiple LiDAR-based perception models, as shown above. Some parts of PCDet are learned from the officially released code of the supported methods. We would like to thank the authors for their proposed methods and official implementations.

We hope that this repo could serve as a strong and flexible codebase to benefit the research community by speeding up the process of reimplementing previous works and/or developing new methods.

OpenPCDet design pattern

  • Data-model separation with a unified point cloud coordinate system, for easily extending to custom datasets.

  • Unified 3D box definition: (x, y, z, dx, dy, dz, heading) (see the sketch after this list).

  • Flexible and clear model structure to easily support various 3D detection models.

  • Support for various models within one framework.
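
To make the box convention concrete, here is a minimal sketch (not part of the toolbox) that converts a box in the (x, y, z, dx, dy, dz, heading) format into its bird's-eye-view corners:

import numpy as np

def box_to_bev_corners(box):
    # box: (x, y, z, dx, dy, dz, heading) -> (4, 2) array of BEV corner coordinates.
    x, y, _, dx, dy, _, heading = box
    # Half-extents of the box footprint before rotation.
    corners = np.array([[ dx,  dy],
                        [ dx, -dy],
                        [-dx, -dy],
                        [-dx,  dy]]) / 2.0
    # Rotate counter-clockwise around the z-axis by the heading angle.
    c, s = np.cos(heading), np.sin(heading)
    rot = np.array([[c, -s],
                    [s,  c]])
    return corners @ rot.T + np.array([x, y])

# Example: a car-sized box 10 m ahead of the sensor, rotated 90 degrees.
print(box_to_bev_corners((10.0, 0.0, -1.0, 4.5, 1.8, 1.6, np.pi / 2)))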

Currently Supported Features

  • Support both one-stage and two-stage 3D object detection frameworks
  • Support distributed training & testing with multiple GPUs and multiple machines (see the example after this list)
  • Support multiple heads on different scales to detect different classes
  • Support stacked version set abstraction to encode a variable number of points in different scenes
  • Support Adaptive Training Sample Selection (ATSS) for target assignment
  • Support RoI-aware point cloud pooling & RoI-grid point cloud pooling
  • Support GPU version 3D IoU calculation and rotated NMS
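
The multi-GPU support is exposed through launch scripts under the tools/ directory. The lines below sketch typical invocations; the GPU count and config file are illustrative, and the exact arguments should be checked against GETTING_STARTED.md:

$ sh scripts/dist_train.sh 8 --cfg_file cfgs/kitti_models/pv_rcnn.yaml
$ sh scripts/dist_test.sh 8 --cfg_file cfgs/kitti_models/pv_rcnn.yaml --ckpt ${CKPT}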

Citation

If you find this project useful in your research, please consider citing:

@inproceedings{shi2020pv,
  title={Pv-rcnn: Point-voxel feature set abstraction for 3d object detection},
  author={Shi, Shaoshuai and Guo, Chaoxu and Jiang, Li and Wang, Zhe and Shi, Jianping and Wang, Xiaogang and Li, Hongsheng},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={10529--10538},
  year={2020}
}


@article{shi2020points,
  title={From Points to Parts: 3D Object Detection from Point Cloud with Part-aware and Part-aggregation Network},
  author={Shi, Shaoshuai and Wang, Zhe and Shi, Jianping and Wang, Xiaogang and Li, Hongsheng},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2020},
  publisher={IEEE}
}

@inproceedings{shi2019pointrcnn,
  title={PointRCNN: 3D Object Proposal Generation and Detection from Point Cloud},
  author={Shi, Shaoshuai and Wang, Xiaogang and Li, Hongsheng},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={770--779},
  year={2019}
}

Contact

The original project is currently maintained by Shaoshuai Shi (@sshaoshuai) and Chaoxu Guo (@Gus-Guo).

This fork is maintained by Javier del Egido and the Robesafe research group.
