A feature analysis for dimension reduction based on a data generation model
with class factors and environment factors
Minkook Cho, Hyeyoung Park
School of Electrical Engineering and Computer Science, Kyungpook National University, Daegu, South Korea
Received 11 July 2008
Accepted 1 June 2009
Available online 6 June 2009
Keywords: PCA (principal component analysis); LDA (linear discriminant analysis); data generation model
Currently, high-dimensional data such as image data is widely used in pattern classification and signal processing. When using high-dimensional data, feature analysis methods such as PCA (principal component analysis) and LDA (linear discriminant analysis) are usually required in order to reduce memory usage and computational complexity as well as to increase classification performance. We propose a feature analysis method for dimension reduction based on a data generation model that is composed of two types of factors: class factors and environment factors. The class factors, which are prototypes of the classes, contain the important information required for discriminating between classes. The environment factors, which represent distortions of the class prototypes, need to be diminished in order to obtain high class separability. Using the data generation model, we aim to exclude the environment factors and extract low-dimensional class factors from the original data. Through computational experiments on artificial data sets and real facial data sets, we confirm that the proposed method can efficiently extract the low-dimensional features required for classification and performs better than the conventional methods.
Crown Copyright © 2009 Published by Elsevier Inc. All rights reserved.
One of the major problems in the pattern recognition field is how to utilize high-dimensional data such as image data. When an image with n × m pixels is used in pattern recognition problems, it is often represented by an nm-dimensional vector in which each element is the intensity of one pixel. When the dimension of the data is on the order of hundreds or even thousands, the processing time increases significantly. Moreover, unnecessary information included in the high-dimensional raw data might interfere with the processing steps that follow. To solve this problem, feature analysis methods should be used to extract low-dimensional features that contain the essential information in the high-dimensional input data. By using feature analysis methods, we can reduce memory usage and computational complexity. Moreover, classification performance can be improved by eliminating unnecessary information and classifying with only the essential information.
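For concreteness, the pixel-to-vector representation described above can be sketched in a few lines. This is a minimal illustration under our own assumptions (the paper does not prescribe NumPy or any particular implementation); it only shows how an n × m image becomes an nm-dimensional vector and why operating on the raw vector is costly.

```python
import numpy as np

# A 64 x 64 grayscale image becomes a 4096-dimensional vector:
# each element of the vector is the intensity of one pixel.
n, m = 64, 64
image = np.random.default_rng(0).integers(0, 256, size=(n, m))
x = image.reshape(-1)                # row-major flattening to an nm-vector
print(x.shape)                       # (4096,)

# Template matching in the raw space touches all nm dimensions, so one
# distance computation already costs O(nm) operations; this is the cost
# that low-dimensional feature extraction is meant to reduce.
template = np.zeros(n * m)
dist = np.linalg.norm(x.astype(float) - template)
```

The quadratic growth of nm with image side length is what makes even modest images (hundreds of pixels per side) produce vectors with tens of thousands of dimensions.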
Feature analysis methods used for image data classification are of two types: statistical approaches and local approaches. Statistical feature analysis methods such as PCA (principal component analysis) extract global features that explain some of the statistical characteristics of the given data set from holistic images. These features are used in the template matching process for classification, which needs to be preceded by appropriate preprocessing such as size normalization. Local feature analysis methods such as SIFT (scale invariant feature transform [2,3]) extract local feature patches from image data and represent whole images in terms of the geometrical relations between the features. Local approaches are designed with image data in mind, and their features are robust to image variations such as scale, location, and illumination. Despite the strong robustness of geometrical local features to image variations, global statistical features can also provide useful information about the distribution of data sets that cannot be captured by local features. Because the two approaches have complementary properties, hybrid methods such as PCA–SIFT have been developed. In this paper, we propose a global statistical feature extraction method that can be used to classify general high-dimensional data, including image data; it can also be combined with various local feature analysis methods to build a more sophisticated image data classification system.
The most well-known statistical feature extraction methods are PCA and LDA, which use a linear transformation matrix to obtain a low-dimensional linear subspace satisfying specific conditions. PCA [1,5,6] extracts low-dimensional features (principal components) that minimize the squared reconstruction error. Although PCA has numerous applications such as face recognition [7,8], it is unsupervised, which means that it does not use class-label information when identifying features. This might cause some loss
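As a concrete illustration of the PCA criterion mentioned above, the following is a minimal NumPy sketch under our own assumptions (the library choice and the function name `pca_features` are ours, not the paper's). It computes the top-k eigenvectors of the sample covariance matrix and projects the centered data onto them; this projection is the rank-k linear map that minimizes the squared reconstruction error.

```python
import numpy as np

def pca_features(X, k):
    """Project data onto the top-k principal components.

    X : (n_samples, n_dims) data matrix
    k : target dimension
    Returns (Z, W, mean): k-dim features, projection matrix, data mean.
    """
    mean = X.mean(axis=0)
    Xc = X - mean                             # center the data
    cov = Xc.T @ Xc / (len(X) - 1)            # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    W = eigvecs[:, ::-1][:, :k]               # top-k principal directions
    Z = Xc @ W                                # low-dimensional features
    return Z, W, mean

# Reconstruction from the k features: among all rank-k linear maps,
# this one has the smallest mean squared reconstruction error.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
Z, W, mean = pca_features(X, 3)
X_hat = Z @ W.T + mean
mse = np.mean((X - X_hat) ** 2)
```

Note that, as stated above, no class labels enter this computation; LDA, in contrast, uses label information to choose the subspace, which motivates the supervised treatment in the remainder of the paper.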
This work was supported by KOSEF (Korea Science and Engineering Foundation)
under Project Code R01-2007-000-20792-0.
* Corresponding author.
E-mail addresses: email@example.com (M. Cho), firstname.lastname@example.org (H. Park).
Computer Vision and Image Understanding 113 (2009) 1005–1016