Depth Based Object Detection from Partial Pose Estimation of Symmetric Objects
Knowing the pose of an object prior to its detection or classification has been shown to improve detection results. However, robust and fine-grained estimation of object pose remains challenging. To this end, we propose to exploit the mirror symmetry of objects, which provides part of the pose information. In our paper, we show how the 3D symmetry of objects can be robustly detected (providing fine but partial pose information) and used to construct a partial-pose-invariant representation of object shape, enabling state-of-the-art object detection.
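The general idea of detecting a mirror-symmetry plane in 3D data can be illustrated with a simple RANSAC-style procedure (this is only a rough sketch for intuition, not the algorithm from our paper, and all function names are illustrative): sample pairs of points, take the perpendicular-bisector plane of each pair as a candidate symmetry plane, and score a candidate by how well the reflected point cloud matches the original.

```python
import numpy as np

def reflect(points, n, d):
    """Reflect points across the plane n . x = d (n is a unit normal)."""
    signed_dist = points @ n - d
    return points - 2.0 * np.outer(signed_dist, n)

def score_plane(points, n, d, tol):
    """Fraction of points whose mirror image lies within tol of some point."""
    mirrored = reflect(points, n, d)
    # Brute-force nearest-neighbor check; fine for small clouds.
    dists = np.linalg.norm(points[None, :, :] - mirrored[:, None, :], axis=2)
    return float(np.mean(dists.min(axis=1) < tol))

def detect_symmetry_plane(points, n_iters=2000, tol=0.02, seed=None):
    """RANSAC-style search: each sampled point pair hypothesizes the plane
    that would map one point onto the other (its perpendicular bisector)."""
    rng = np.random.default_rng(seed)
    best_n, best_d, best_score = None, None, -1.0
    for _ in range(n_iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        if np.allclose(p, q):
            continue
        n = (p - q) / np.linalg.norm(p - q)  # normal along the pair direction
        d = n @ (p + q) / 2.0                # plane through the midpoint
        s = score_plane(points, n, d, tol)
        if s > best_score:
            best_n, best_d, best_score = n, d, s
    return best_n, best_d, best_score
```

For example, a cloud built from random points together with their reflections across the plane x = 0 should yield a detected normal close to (±1, 0, 0) with offset near zero. A practical implementation would use a k-d tree for the nearest-neighbor step and a refinement stage; the brute-force version above is only for clarity.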
More information will be added following the ECCV 2014 conference. In the meantime, you are invited to have a look at our paper, referenced below.
- Barnea E. and Ben-Shahar O., Depth Based Object Detection from Partial Pose Estimation of Symmetric Objects, in Proceedings of the European Conference on Computer Vision (ECCV), 2014.
Code and annotations
- Our annotations of the 3D center points of the objects in the Berkeley 3D object dataset can be found here.
- The symmetry detection code can be found here (last update 20.09.2014). Please note that this does not include the computation of the feature vectors.
Who and Where…
This research is a joint work by Ehud Barnea and Ohad Ben-Shahar of the Computer Science Department, Ben-Gurion University of the Negev, Beer Sheva, Israel. This work was presented in the Proceedings of the European Conference on Computer Vision (ECCV), 2014.
This research was funded in part by the European Commission under the 7th Framework Programme (CROPS, GA no. 246252). We also gratefully acknowledge the support of the Frankel Fund and the ABC Robotics Initiative at Ben-Gurion University.