This Springer Brief presents a comprehensive review of information-theoretic methods for robust recognition. A variety of information-theoretic methods have been proposed over the past decade for a wide range of computer vision applications; this work brings them together and explains the theory, optimization, and use of information entropy.
The authors adopt a new information-theoretic concept, correntropy, as a robust measure and apply it to robust face recognition and object recognition problems. For computational efficiency, the brief introduces the additive and multiplicative forms of half-quadratic optimization, which efficiently minimize entropy-based objectives, and a two-stage sparse representation framework for large-scale recognition problems. It also describes the strengths and deficiencies of different robust measures in solving robust recognition problems.
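As background, correntropy between two signals is commonly estimated as the average of a Gaussian kernel applied to their elementwise difference, V_sigma(x, y) = E[exp(-(x - y)^2 / (2 sigma^2))]; because the kernel saturates, gross errors such as occluded pixels have bounded influence. The following is a minimal NumPy sketch of this empirical estimate; the function name, toy data, and bandwidth sigma are illustrative choices and are not taken from the brief.

```python
import numpy as np

def correntropy(x, y, sigma=1.0):
    """Empirical correntropy between two equal-length vectors.

    Uses the Gaussian kernel k_sigma(e) = exp(-e^2 / (2 * sigma^2)),
    averaged over elementwise errors e_i = x_i - y_i. Large errors
    (e.g. occluded pixels) contribute almost nothing, which is what
    makes the measure robust compared with a squared-error loss.
    """
    e = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return np.mean(np.exp(-e**2 / (2.0 * sigma**2)))

# Toy comparison: a clean reconstruction vs. one with a gross outlier.
y       = np.array([1.0, 2.0, 3.0, 4.0])
clean   = np.array([1.1, 2.0, 2.9, 4.0])
corrupt = np.array([1.1, 2.0, 2.9, 40.0])   # one heavily corrupted entry

print(correntropy(y, clean))    # close to 1: good match
print(correntropy(y, corrupt))  # drops by only ~1/4: the outlier's effect is bounded
```

Under a squared-error loss the corrupted entry would dominate the objective, whereas here its contribution is capped at 1/N, which is the intuition behind using correntropy as a robust measure.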
Table of Contents
Introduction
M-estimators and Half-quadratic Minimization
Information Measures
Correntropy and Linear Representation
ℓ1 Regularized Correntropy
Correntropy with Nonnegative Constraint