The benchmark method will be publicly available upon publication!
Autonomous valet parking systems eliminate the need for human drivers to find parking slots, reducing the hassle of parking in congested areas. Fisheye images provide valuable information over a large area at once; nevertheless, no current dataset captures the complexity of parking scenes at the level of granularity required by real-world applications. To address this, we introduce ParkScape, a fisheye image dataset with highly accurate, fine-grained corner-based parking slot annotations. ParkScape provides annotations for 10,000 images covering a variety of diverse scenarios, including shopping malls, industrial parks, and residential communities. Please cite our paper if you use it in your work!
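For intuition, corner-based labeling represents each slot by its corner points in the fisheye image plane rather than by an axis-aligned box. A minimal sketch, assuming a hypothetical four-corner ordering (entry-left, entry-right, rear-right, rear-left) that may differ from the actual ParkScape annotation schema:

```python
import math

def slot_geometry(corners):
    """Derive entry-line width and orientation from four slot corners.

    corners: list of (x, y) points, hypothetically ordered as
    entry-left, entry-right, rear-right, rear-left.
    """
    (x1, y1), (x2, y2), _, _ = corners
    entry_len = math.hypot(x2 - x1, y2 - y1)            # entry-line width in pixels
    angle = math.degrees(math.atan2(y2 - y1, x2 - x1))  # entry-line orientation in degrees
    return entry_len, angle

# Horizontal entry line, 60 px wide
length, angle = slot_geometry([(100, 200), (160, 200), (160, 320), (100, 320)])
print(length, angle)  # 60.0 0.0
```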
- [2024/03/04] We have released ParkScape; you can download the dataset from here.
- Python 3.8
- Pytorch 1.11.0
- CUDA 11.3 or higher
First, install the dependencies:
# clone project
git clone https://github.com/Vipermdl/ParkScape
# install project
cd ParkScape
pip install -r requirements.txt
To run the evaluation process, download the model weights:
wget -q https://github.com/Vipermdl/releases/download/v0.1.0-alpha/parkscape_detector.pth
Run inference with detect.py:
python detect.py --weights parkscape_detector.pth --source 0 # webcam
img.jpg # image
vid.mp4 # video
screen # screenshot
path/ # directory
list.txt # list of images
list.streams # list of streams
'path/*.jpg' # glob
'https://youtu.be/LNwODJXcvt4' # YouTube
'rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP stream
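The `--source` values above are typically dispatched by inspecting the argument's form. A minimal sketch of such routing, with hypothetical category names (detect.py's actual logic may differ):

```python
def classify_source(source: str) -> str:
    """Route a --source argument to an input backend by its form."""
    s = str(source)
    if s.isdigit():
        return "webcam"        # e.g. "0"
    if s.startswith(("rtsp://", "rtmp://", "http://", "https://")):
        return "stream"        # RTSP/RTMP/HTTP streams, YouTube URLs
    if s == "screen":
        return "screenshot"
    if s.endswith((".txt", ".streams")):
        return "list"          # list of images or streams
    if "*" in s:
        return "glob"          # e.g. path/*.jpg
    if s.endswith("/"):
        return "directory"
    return "file"              # single image or video

print(classify_source("0"))  # webcam
```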
After the model and dataset are downloaded automatically, training the parking slot detector takes about 2 days on an NVIDIA 3090 GPU (multi-GPU setups are proportionally faster). Use the largest --batch-size your hardware allows, or pass --batch-size -1 for AutoBatch.
python train.py --data parkscape.yaml --epochs 300 --cfg parking_slot_detector.yaml --batch-size 16
Method | Backbone | AP_{50} | AP_{75} | AP | AP_{M} | FPS
---|---|---|---|---|---|---
CID | HRNet-W32 | 49.9 | 46.3 | 43.9 | 46.7 | 15.46
DEKR | HRNet-W32 | 48.4 | 45.3 | 43.3 | 46.3 | 16.56
Associative Embedding | HRNet-W32 | 52.9 | 43.9 | 43.8 | 48.0 | 5.854
CenterNet | DLA-34 | 51.4 | 47.5 | 44.9 | 48.5 | 52.63
Ours | CSPDarkNet53 | 55.1 | 50.9 | 47.0 | 48.1 | 54.05
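For reference, AP_{50} and AP_{75} denote average precision at IoU thresholds of 0.5 and 0.75. A minimal pure-Python sketch of single-threshold AP for box-style detections (the benchmark's official evaluation protocol may differ):

```python
def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def average_precision(preds, gts, thr=0.5):
    """AP at one IoU threshold. preds: (score, box) pairs; gts: boxes."""
    preds = sorted(preds, key=lambda p: -p[0])   # rank by confidence
    matched, tps = set(), []
    for _, box in preds:
        best, best_j = 0.0, -1
        for j, g in enumerate(gts):              # greedy match to unused GT
            if j not in matched and iou(box, g) > best:
                best, best_j = iou(box, g), j
        if best >= thr:
            matched.add(best_j)
            tps.append(1)
        else:
            tps.append(0)
    ap, tp, fp, prev_recall = 0.0, 0, 0, 0.0     # integrate precision over recall
    for t in tps:
        tp, fp = tp + t, fp + (1 - t)
        recall, precision = tp / len(gts), tp / (tp + fp)
        ap += precision * (recall - prev_recall)
        prev_recall = recall
    return ap
```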
Contributions are always welcome!
See LICENSE.txt for license information.
Dongliang Ma - @dongliangma1 - mdl.viper@gmail.com
Project Link: https://github.com/Vipermdl/ParkScape
If ParkScape is useful or relevant to your research, please consider citing our paper:
@ARTICLE{fu2024parkscape,
author={Fu, Li and Ma, Dongliang and Qu, Xin and Jiang, Xin and Shan, Lie and Zeng, Dan},
journal={IEEE Transactions on Instrumentation and Measurement},
title={ParkScape: A Large-Scale Fisheye Dataset for Parking Slot Detection and a Benchmark Method},
year={2024},
volume={73},
number={},
pages={1-13},
keywords={Cameras;Distortion;Autonomous vehicles;Detectors;Convolution;Lighting;Annotations;Autonomous driving;cameras;datasets;fisheye images;parking slot detection},
doi={10.1109/TIM.2024.3406840}}