
Citation Details

Andy Fraser, Nick Hengartner, Kevin Vixie and Brendt Wohlberg, "Incorporating invariants in Mahalanobis distance based classifiers: Application to Face Recognition", in International Joint Conference on Neural Networks (IJCNN), (Portland, OR, USA), doi:10.1109/IJCNN.2003.1224070, Jul 2003


We present a technique for combining prior knowledge about transformations that should be ignored with a covariance matrix estimated from training data to make an improved Mahalanobis distance classifier. Modern classification problems often involve objects represented by high-dimensional vectors or images (for example, sampled speech or human faces). The complex statistical structure of these representations is often difficult to infer from the relatively limited training data sets that are available in practice.

Thus, we wish to efficiently utilize any available a priori information, such as transformations of the representations with respect to which the associated objects are known to retain the same classification (for example, spatial shifts of an image of a handwritten digit do not alter the identity of the digit).

These transformations, which are often relatively simple in the space of the underlying objects, are usually non-linear in the space of the object representation, making their inclusion within the framework of a standard statistical classifier difficult. Motivated by prior work of Simard et al., we have constructed a new classifier which combines statistical information from training data with linear approximations to known invariance transformations. When tested on a face recognition task, performance exceeded that of the best algorithm in a reference software distribution by a significant margin.
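The idea of combining an estimated covariance with linearized invariance directions can be sketched as follows. This is a minimal illustration, not the paper's exact construction: the function name, the additive augmentation of the covariance by tangent outer products, and the weight `lam` are assumptions made for the example.

```python
import numpy as np

def invariant_mahalanobis(x, mu, cov, tangents, lam=1.0):
    """Mahalanobis distance with the covariance augmented by tangent
    vectors of known invariance transformations (illustrative sketch).

    x        : sample vector
    mu       : class mean estimated from training data
    cov      : class covariance estimated from training data
    tangents : list of tangent vectors (linear approximations to the
               invariance transformations at mu)
    lam      : weight controlling how cheap displacement along the
               invariant directions becomes (hypothetical parameter)
    """
    # Inflate variance along the tangent directions so that
    # displacements the classifier should ignore cost little.
    aug = cov + lam * sum(np.outer(t, t) for t in tangents)
    diff = x - mu
    # Squared Mahalanobis distance under the augmented covariance.
    return float(diff @ np.linalg.solve(aug, diff))
```

With an identity covariance and a unit tangent vector, a displacement of length 1 along the tangent has squared distance 1/(1 + lam), so larger `lam` makes the classifier increasingly indifferent to that direction.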

BibTeX Entry

@inproceedings{fraser2003incorporating,
  author = {Andy Fraser and Nick Hengartner and Kevin Vixie and Brendt Wohlberg},
  title = {Incorporating invariants in {M}ahalanobis distance based classifiers: Application to Face Recognition},
  year = {2003},
  month = jul,
  urlpdf = {},
  booktitle = {International Joint Conference on Neural Networks (IJCNN)},
  address = {Portland, OR, USA},
  doi = {10.1109/IJCNN.2003.1224070}
}