Learning to Separate: Detecting Heavily-Occluded Objects in Urban Scenes


While visual object detection with deep learning has received much attention over the past decade, cases of heavy intra-class occlusion have not been studied thoroughly. In this work, we propose a Non-Maximum Suppression (NMS) algorithm that dramatically improves detection recall while maintaining high precision in scenes with heavy occlusion. Our NMS algorithm is derived from a novel embedding mechanism in which the semantic and geometric features of the detected boxes are jointly exploited. The embedding makes it possible to determine whether two heavily overlapping boxes belong to the same object in the physical world. Our approach is particularly useful for car detection and pedestrian detection in urban scenes, where occlusions often occur. We show the effectiveness of our approach by creating a model called SG-Det (short for Semantics and Geometry Detection) and testing SG-Det on two widely adopted datasets, KITTI and CityPersons, on which it achieves state-of-the-art performance.

Learned Semantics-Geometry Embedding (SGE) for bounding boxes predicted by our proposed detector on KITTI and CityPersons images. Heavily overlapping boxes are separated in the SGE space according to the objects they are assigned to. Thus, the distance between SGEs can guide NMS to keep the correct boxes in scenes with heavy intra-class occlusion.
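To illustrate the idea, below is a minimal sketch (not the paper's exact SG-NMS) of how an embedding distance can modify greedy NMS: a lower-scoring box that overlaps a kept box heavily is suppressed only if its embedding is also close to the kept box's embedding, i.e., the two boxes likely cover the same physical object. The function names and the `iou_thr`/`dist_thr` thresholds are illustrative assumptions, not values from the paper.

```python
import numpy as np

def iou(a, b):
    # a, b: boxes as [x1, y1, x2, y2]
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def embedding_nms(boxes, scores, embeddings, iou_thr=0.5, dist_thr=1.0):
    """Greedy NMS guided by per-box embeddings (illustrative sketch).

    A box is suppressed only when it both overlaps a kept box above
    iou_thr AND its embedding lies within dist_thr of that kept box's
    embedding; far-apart embeddings mark distinct occluded objects.
    """
    order = np.argsort(scores)[::-1]  # highest score first
    keep = []
    for i in order:
        suppressed = False
        for j in keep:
            if (iou(boxes[i], boxes[j]) > iou_thr and
                    np.linalg.norm(embeddings[i] - embeddings[j]) < dist_thr):
                suppressed = True
                break
        if not suppressed:
            keep.append(int(i))
    return keep
```

For example, two boxes with IoU around 0.7 would both survive if their embeddings are far apart (two occluded objects), whereas classical NMS would discard the lower-scoring one unconditionally.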

Link to arXiv.

@inproceedings{yang2020learning,
  title={Learning to Separate: Detecting Heavily-Occluded Objects in Urban Scenes},
  author={Yang, Chenhongyi and Ablavsky, Vitaly and Wang, Kaihong and Feng, Qi and Betke, Margrit},
  booktitle={Proc. European Conf. on Computer Vision (ECCV)},
  year={2020}
}