Feature extraction and matching provide the basis for many methods for object registration, modeling, retrieval, and recognition. However, this approach typically introduces false matches, due to a lack of distinctive features, noise, occlusion, and cluttered backgrounds. In registration, these false matches lead to inaccurate estimation of the underlying transformation that brings the overlapping shapes into the best possible alignment. In this paper, we propose a novel boosting-inspired method to tackle this challenging task. It comprises three key steps: (i) estimation of the underlying transformation in the weighted least squares sense, (ii) estimation and regularization of the boosting parameter via Tsallis entropy, and (iii) re-estimation and regularization of the weights via Shannon entropy, updated with a maximum fusion rule. The process is iterated. The final optimal underlying transformation is estimated as a weighted average of the transformations estimated in the latest iterations, with weights given by the boosting parameters. A comparative study based on real shape data shows that the proposed method outperforms four other state-of-the-art methods for evaluating the established point matches, enabling more accurate and stable estimation of the underlying transformation.
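The abstract's step (i), weighted least-squares estimation of a rigid transformation, and the Tsallis entropy used in step (ii) can both be sketched in a few lines. The sketch below is illustrative only: the function names are mine, the rigid-transform step is implemented as the standard weighted Kabsch/Umeyama procedure (a common choice for this subproblem, not necessarily the exact formulation in the paper), and the boosting, regularization, and fusion details are omitted.

```python
import numpy as np

def weighted_rigid_transform(P, Q, w):
    """Weighted least-squares estimate of (R, t) minimizing
    sum_i w_i * ||R @ P[i] + t - Q[i]||**2 (weighted Kabsch).
    P, Q: (n, d) matched point sets; w: (n,) nonnegative weights."""
    w = np.asarray(w, dtype=float)
    w = w / w.sum()                         # normalize weights
    mu_p = w @ P                            # weighted centroids
    mu_q = w @ Q
    Pc, Qc = P - mu_p, Q - mu_q
    H = Pc.T @ (w[:, None] * Qc)            # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    D = np.diag([1.0] * (P.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = mu_q - R @ mu_p
    return R, t

def tsallis_entropy(p, q):
    """Tsallis entropy S_q(p) = (1 - sum_i p_i**q) / (q - 1) for q != 1;
    it recovers the Shannon entropy in the limit q -> 1."""
    p = np.asarray(p, dtype=float)
    return (1.0 - np.sum(p ** q)) / (q - 1.0)
```

In an iterative scheme of this kind, false matches would be handled by driving their weights `w` toward zero across iterations, so they contribute little to the weighted cross-covariance and hence to the estimated transformation.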
Liu, Y., Liu, H., Martin, R. R., De Dominicis, L., Song, R., & Zhao, Y. (2016). Accurately estimating rigid transformations in registration using a boosting-inspired mechanism. Pattern Recognition, 60, 849–862. https://doi.org/10.1016/j.patcog.2016.07.011