Projection pursuit

Projection pursuit (PP) is a type of statistical technique that involves finding the most "interesting" possible projections of multidimensional data. Often, projections that deviate more from a normal distribution are considered more interesting. As each projection is found, the data are reduced by removing the component along that projection, and the process is repeated to find new projections; this is the "pursuit" aspect that motivated the technique known as matching pursuit.[1][2]

The idea of projection pursuit is to locate the projection or projections from high-dimensional space to low-dimensional space that reveal the most details about the structure of the data set. Once an interesting set of projections has been found, existing structures (clusters, surfaces, etc.) can be extracted and analyzed separately.

Projection pursuit has been widely used for blind source separation, so it is very important in independent component analysis. Projection pursuit seeks one projection at a time such that the extracted signal is as non-Gaussian as possible.[3]
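A minimal sketch of the idea in Python follows (it is not Friedman and Tukey's original algorithm): it uses the absolute excess kurtosis of the projected data as the projection index, a general-purpose optimiser with random restarts, and deflation to remove each component once its direction has been found. The function names projection_index and pursue, and all parameter defaults, are illustrative assumptions rather than a standard API.

```python
# Minimal projection pursuit sketch: maximise a non-Gaussianity index
# (absolute excess kurtosis) one direction at a time, deflating the data
# after each direction is found. Illustrative only, not an optimised or
# canonical implementation.
import numpy as np
from scipy.optimize import minimize

def projection_index(w, X):
    """Absolute excess kurtosis of X projected onto direction w (0 for Gaussian data)."""
    w = w / np.linalg.norm(w)              # evaluate on the unit sphere
    z = X @ w
    s = z.std()
    if s < 1e-12:                          # degenerate direction (already removed)
        return 0.0
    z = (z - z.mean()) / s
    return abs(np.mean(z ** 4) - 3.0)

def pursue(X, n_directions=2, restarts=10, seed=0):
    """Return `n_directions` unit vectors found by one-at-a-time pursuit."""
    rng = np.random.default_rng(seed)
    X = X - X.mean(axis=0)                 # centre the data
    directions = []
    for _ in range(n_directions):
        best_w, best_val = None, -np.inf
        for _ in range(restarts):          # random restarts: the index is non-convex
            w0 = rng.standard_normal(X.shape[1])
            res = minimize(lambda w: -projection_index(w, X), w0, method="Nelder-Mead")
            if -res.fun > best_val:
                best_val, best_w = -res.fun, res.x
        w = best_w.copy()
        for d in directions:               # keep the new direction orthogonal to earlier ones
            w -= (w @ d) * d
        w /= np.linalg.norm(w)
        directions.append(w)
        X = X - np.outer(X @ w, w)         # deflation: remove the component just found
    return np.array(directions)
```

Note that a kurtosis-based index such as the one sketched here is sensitive to outliers; more robust projection indices are often preferred in practice.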

History[4]

The projection pursuit technique was originally proposed and experimented with by Kruskal.[5] Related ideas occur in Switzer (1970) and Switzer and Wright (1971). The first successful implementation is due to Jerome H. Friedman and John Tukey (1974), who named the technique projection pursuit.

The original purpose of projection pursuit was to machine-pick "interesting" low-dimensional projections of a high-dimensional point cloud by numerically maximizing a certain objective function or projection index.

Several years later, Friedman and Stuetzle extended the idea behind projection pursuit and added projection pursuit regression (PPR), projection pursuit classification (PPC), and projection pursuit density estimation (PPDE).

Features[6]

The most exciting feature of projection pursuit is that it is one of the very few multivariate methods able to bypass the "curse of dimensionality" caused by the fact that high-dimensional space is mostly empty. In addition, projection pursuit is able to ignore irrelevant (i.e. noisy and information-poor) variables. This is a distinct advantage over methods based on interpoint distances, such as minimal spanning trees, multidimensional scaling and most clustering techniques.

Many of the methods of classical multivariate analysis turn out to be special cases of projection pursuit. Examples are principal component analysis and discriminant analysis, and the quartimax and oblimax methods in factor analysis.
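As an illustration of the first of these special cases, taking the variance of the projected data as the projection index recovers the first principal component. The following sketch (using the same NumPy/SciPy setup as above, with illustrative toy data and non-standard variable names) checks this numerically:

```python
# Sketch: with variance as the projection index, projection pursuit
# reduces to principal component analysis (first component).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 3)) @ np.diag([3.0, 1.0, 0.3])   # anisotropic toy data
X = X - X.mean(axis=0)

def variance_index(w):
    w = w / np.linalg.norm(w)               # variance of the projection onto unit w
    return np.var(X @ w)

res = minimize(lambda w: -variance_index(w), rng.standard_normal(3), method="Nelder-Mead")
w_pp = res.x / np.linalg.norm(res.x)

# First principal component from the covariance matrix, for comparison.
_, eigvecs = np.linalg.eigh(np.cov(X, rowvar=False))
w_pca = eigvecs[:, -1]                      # eigenvector of the largest eigenvalue

print(abs(w_pp @ w_pca))                    # close to 1: same direction up to sign
```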

One serious drawback of projection pursuit methods is their high computational cost.

References

  1. J. H. Friedman and J. W. Tukey (Sep 1974). "A Projection Pursuit Algorithm for Exploratory Data Analysis" (PDF). IEEE Transactions on Computers. C–23 (9): 881–890. doi:10.1109/T-C.1974.224051. ISSN 0018-9340.
  2. M. C. Jones and R. Sibson (1987). "What is Projection Pursuit?". Journal of the Royal Statistical Society, Series A. 150 (1): 1–37. doi:10.2307/2981662. JSTOR 2981662.
  3. James V. Stone (2004). Independent Component Analysis: A Tutorial Introduction. Cambridge, MA: The MIT Press. ISBN 0-262-69315-1.
  4. P. J. Huber (Jun 1985). "Projection pursuit" (PDF). The Annals of Statistics. 13 (2): 435–475. doi:10.1214/aos/1176349519.
  5. Kruskal, J. B. (1969). "Toward a practical method which helps uncover the structure of a set of observations by finding the line transformation which optimizes a new 'index of condensation'". In Milton, R. C.; Nelder, J. A. (eds.), Statistical Computation. New York: Academic Press, pp. 427–440.
  6. P. J. Huber (Jun 1985). "Projection pursuit" (PDF). The Annals of Statistics. 13 (2): 435–475. doi:10.1214/aos/1176349519.