
BiSeNetV2

BiSeNet V2: Bilateral Network with Guided Aggregation for Real-time Semantic Segmentation

Introduction

Official Repo

Code Snippet

Abstract

The low-level details and high-level semantics are both essential to the semantic segmentation task. However, to speed up the model inference, current approaches almost always sacrifice the low-level details, which leads to a considerable accuracy decrease. We propose to treat these spatial details and categorical semantics separately to achieve high accuracy and high efficiency for real-time semantic segmentation. To this end, we propose an efficient and effective architecture with a good trade-off between speed and accuracy, termed Bilateral Segmentation Network (BiSeNet V2). This architecture involves: (i) a Detail Branch, with wide channels and shallow layers to capture low-level details and generate high-resolution feature representation; (ii) a Semantic Branch, with narrow channels and deep layers to obtain high-level semantic context. The Semantic Branch is lightweight due to reducing the channel capacity and a fast-downsampling strategy. Furthermore, we design a Guided Aggregation Layer to enhance mutual connections and fuse both types of feature representation. Besides, a booster training strategy is designed to improve the segmentation performance without any extra inference cost. Extensive quantitative and qualitative evaluations demonstrate that the proposed architecture performs favourably against a few state-of-the-art real-time semantic segmentation approaches. Specifically, for a 2,048x1,024 input, we achieve 72.6% Mean IoU on the Cityscapes test set with a speed of 156 FPS on one NVIDIA GeForce GTX 1080 Ti card, which is significantly faster than existing methods, yet we achieve better segmentation accuracy.
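As a rough illustration of the bilateral design, the following is a minimal PyTorch sketch: a wide-and-shallow Detail Branch, a narrow-and-deep Semantic Branch with fast downsampling, and a gated fusion step standing in for the Guided Aggregation Layer. Layer counts, channel widths, and the fusion details here are simplified placeholders, not the paper's exact architecture; see the Official Repo for the reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_bn_relu(in_ch, out_ch, stride=1):
    # 3x3 conv -> BN -> ReLU, the basic unit used in both branches.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class DetailBranch(nn.Module):
    # Wide channels, shallow layers: preserves a detail-rich
    # feature map at 1/8 of the input resolution.
    def __init__(self):
        super().__init__()
        self.stages = nn.Sequential(
            conv_bn_relu(3, 64, stride=2), conv_bn_relu(64, 64),
            conv_bn_relu(64, 64, stride=2), conv_bn_relu(64, 64),
            conv_bn_relu(64, 128, stride=2), conv_bn_relu(128, 128),
        )

    def forward(self, x):
        return self.stages(x)  # stride 8, 128 channels


class SemanticBranch(nn.Module):
    # Narrow channels, deep path with fast downsampling:
    # a cheap route to high-level semantic context at stride 32.
    def __init__(self):
        super().__init__()
        self.stages = nn.Sequential(
            conv_bn_relu(3, 16, stride=2), conv_bn_relu(16, 16, stride=2),
            conv_bn_relu(16, 32, stride=2), conv_bn_relu(32, 64, stride=2),
            conv_bn_relu(64, 128, stride=2),
        )

    def forward(self, x):
        return self.stages(x)  # stride 32, 128 channels


class GuidedAggregation(nn.Module):
    # Simplified stand-in for the Guided Aggregation Layer:
    # each branch gates the other via sigmoid attention, then
    # the two gated feature maps are summed.
    def __init__(self, ch=128):
        super().__init__()
        self.detail_gate = nn.Sequential(nn.Conv2d(ch, ch, 1), nn.Sigmoid())
        self.semantic_gate = nn.Sequential(nn.Conv2d(ch, ch, 1), nn.Sigmoid())

    def forward(self, detail, semantic):
        # Upsample the semantic features to the detail resolution.
        semantic = F.interpolate(semantic, size=detail.shape[2:],
                                 mode='bilinear', align_corners=False)
        return (detail * self.semantic_gate(semantic)
                + semantic * self.detail_gate(detail))


x = torch.randn(1, 3, 1024, 1024)
out = GuidedAggregation()(DetailBranch()(x), SemanticBranch()(x))
print(out.shape)  # torch.Size([1, 128, 128, 128]), i.e. 1/8 resolution
```

A segmentation head would then upsample the fused 1/8-resolution features back to the input size; the paper's booster strategy additionally attaches auxiliary heads to the Semantic Branch during training only, so inference cost is unchanged.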

Citation

@article{yu2021bisenet,
  title={Bisenet v2: Bilateral network with guided aggregation for real-time semantic segmentation},
  author={Yu, Changqian and Gao, Changxin and Wang, Jingbo and Yu, Gang and Shen, Chunhua and Sang, Nong},
  journal={International Journal of Computer Vision},
  pages={1--18},
  year={2021},
  publisher={Springer}
}

Results and models

Cityscapes

| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU (ms+flip) | config | download |
| ---------------- | --------- | --------- | ------- | -------- | -------------- | ----- | -------------- | ------ | ------------ |
| BiSeNetV2 | BiSeNetV2 | 1024x1024 | 160000 | 7.64 | 31.77 | 73.21 | 75.74 | config | model \| log |
| BiSeNetV2 (OHEM) | BiSeNetV2 | 1024x1024 | 160000 | 7.64 | - | 75.30 | 77.06 | config | model \| log |
| BiSeNetV2 (4x8) | BiSeNetV2 | 1024x1024 | 160000 | 15.05 | - | 75.76 | 77.79 | config | model \| log |
| BiSeNetV2 (FP16) | BiSeNetV2 | 1024x1024 | 160000 | 5.77 | 36.65 | 73.07 | 75.13 | config | model \| log |

Note:

  • OHEM: Online Hard Example Mining is adopted during training.
  • FP16: mixed-precision (FP16) training is adopted.
  • 4x8: trained with 4 GPUs and 8 samples per GPU.
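To try one of the checkpoints above, a minimal sketch using the mmsegmentation 0.x Python API is shown below. The config and checkpoint paths are placeholders for the files linked in the config and download columns of the table.

```python
from mmseg.apis import inference_segmentor, init_segmentor

# Placeholder paths: substitute the config file and checkpoint
# linked in the results table above.
config_file = 'configs/bisenetv2/bisenetv2_fcn_4x4_1024x1024_160k_cityscapes.py'
checkpoint_file = 'bisenetv2_cityscapes.pth'

# Build the model from the config and load the trained weights.
model = init_segmentor(config_file, checkpoint_file, device='cuda:0')

# Run inference on one image; the result is a list containing a
# per-pixel class map (an H x W ndarray) for the input image.
result = inference_segmentor(model, 'demo/demo.png')
```

Training follows the usual mmsegmentation workflow, e.g. `bash tools/dist_train.sh <config> 4` for the 4-GPU settings in the table; in the 0.x config convention, the 4x8 variant additionally sets `samples_per_gpu=8` in its config.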