Publication | Closed Access
Dimensionality reduction by random mapping: fast similarity computation for clustering
Citations: 387 · References: 6 · Year: 2002 · Venue: Unknown
Keywords: Engineering, Machine Learning, Text Mining, Image Analysis, Information Retrieval, Data Science, Data Mining, Pattern Recognition, Document Classification, Random Mapping, Principal Component Analysis, Document Clustering, Automatic Classification, Similarity Search, Knowledge Discovery, Final Dimensionality, Computer Science, Dimensionality Reduction, Nonlinear Dimensionality Reduction, Vector Space Model, Data Vectors
When data vectors are high-dimensional, it is computationally infeasible to use data analysis or pattern recognition algorithms that repeatedly compute similarities or distances in the original data space. It is therefore necessary to reduce the dimensionality before, for example, clustering the data. If the dimensionality is very high, as in the WEBSOM method, which organizes textual document collections on a self-organizing map, then even commonly used dimensionality reduction methods such as principal component analysis may be too costly. It is demonstrated that the document classification accuracy obtained after the dimensionality has been reduced by a random mapping is almost as good as the original accuracy, provided the final dimensionality is sufficiently large (about 100 out of 6000). In fact, it can be shown that the inner product (similarity) between the mapped vectors closely follows the inner product of the original vectors.
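The idea in the abstract can be sketched in a few lines: project high-dimensional vectors through a random matrix with normalized columns and check that similarities survive the mapping. This is a minimal illustration, not the paper's implementation; the Gaussian matrix, the sparse "document" vectors, and the dimensions (6000 reduced to 100, the figures quoted above) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 6000, 100  # original and reduced dimensionality (figures from the abstract)

# Random mapping matrix R: each of the d columns is a random vector in R^k,
# normalized to unit length so that R^T R is approximately the identity.
R = rng.standard_normal((k, d))
R /= np.linalg.norm(R, axis=0)

# Two sparse term-count vectors (hypothetical documents) with overlapping terms.
idx = rng.choice(d, size=50, replace=False)
x = np.zeros(d)
x[idx] = rng.random(50)
y = np.zeros(d)
y[idx[:25]] = rng.random(25)                                # terms shared with x
y[rng.choice(d, size=25, replace=False)] = rng.random(25)   # terms mostly distinct

def cos(a, b):
    """Cosine similarity, i.e. the normalized inner product."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# The similarity computed in the 100-dimensional mapped space stays close
# to the similarity in the original 6000-dimensional space.
print(f"original: {cos(x, y):.3f}  mapped: {cos(R @ x, R @ y):.3f}")
```

The approximation error of the mapped inner products shrinks roughly as 1/sqrt(k), which is why a final dimensionality of about 100 already suffices in the experiment the abstract describes.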