Publication | Closed Access
Sparse Subspace Clustering: Algorithm, Theory, and Applications
Citations: 3.1K | References: 60 | Year: 2013
Sparse Representation · Image Analysis · Machine Learning · Data Science · Data Mining · Pattern Recognition · Engineering · Knowledge Discovery · Multilinear Subspace Learning · Unsupervised Machine Learning · Computer Science · Dimensionality Reduction · Sparse Subspace Clustering · Low-rank Approximation · Face Clustering
Many real‑world data sets are high‑dimensional yet lie near low‑dimensional subspaces that correspond to distinct classes. This work introduces sparse subspace clustering, an algorithm designed to partition data points that belong to a union of low‑dimensional subspaces. By seeking a sparse representation of each point using only points from its own subspace, the method solves a convex relaxation of a sparse optimization problem and then applies spectral clustering to recover the subspace memberships. The algorithm is efficient, robust to intersections, noise, outliers, and missing entries, and its effectiveness is demonstrated on synthetic data, motion segmentation, and face clustering experiments.
Many real-world problems deal with collections of high-dimensional data, such as images, videos, text and web documents, DNA microarray data, and more. Often, such high-dimensional data lie close to low-dimensional structures corresponding to several classes or categories to which the data belong. In this paper, we propose and study an algorithm, called sparse subspace clustering, to cluster data points that lie in a union of low-dimensional subspaces. The key idea is that, among the infinitely many possible representations of a data point in terms of other points, a sparse representation corresponds to selecting a few points from the same subspace. This motivates solving a sparse optimization program whose solution is used in a spectral clustering framework to infer the clustering of the data into subspaces. Since solving the sparse optimization program is in general NP-hard, we consider a convex relaxation and show that, under appropriate conditions on the arrangement of the subspaces and the distribution of the data, the proposed minimization program succeeds in recovering the desired sparse representations. The proposed algorithm is efficient and can handle data points near the intersections of subspaces. Another key advantage of the proposed algorithm with respect to the state of the art is that it can deal directly with data nuisances, such as noise, sparse outlying entries, and missing entries, by incorporating the model of the data into the sparse optimization program. We demonstrate the effectiveness of the proposed algorithm through experiments on synthetic data as well as the two real-world problems of motion segmentation and face clustering.
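The pipeline described in the abstract — express each point as a sparse combination of the other points, then run spectral clustering on the resulting coefficients — can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it substitutes a Lasso solve for the paper's constrained ℓ1 program (a common way to handle the noisy variant), and the `alpha` parameter and the toy two-line data set are assumptions made for the example.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

def sparse_subspace_clustering(X, n_clusters, alpha=0.01, random_state=0):
    """Minimal SSC sketch. X has shape (n_points, dim).

    Each point x_i is represented as a sparse combination of the other
    points (Lasso relaxation of the l1 program, which tolerates noise);
    the coefficient magnitudes define an affinity for spectral clustering.
    """
    n = X.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        # Express x_i using all points except itself; Lasso encourages
        # nonzero coefficients only on points from the same subspace.
        others = np.delete(np.arange(n), i)
        lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        lasso.fit(X[others].T, X[i])
        C[i, others] = lasso.coef_
    # Symmetrize coefficient magnitudes into an affinity matrix.
    W = np.abs(C) + np.abs(C).T
    sc = SpectralClustering(n_clusters=n_clusters, affinity="precomputed",
                            random_state=random_state)
    return sc.fit_predict(W)

# Toy example: points near two orthogonal lines (1-D subspaces) in R^3.
rng = np.random.default_rng(0)
t1 = rng.uniform(0.5, 1.5, 20)
t2 = rng.uniform(0.5, 1.5, 20)
X = np.vstack([np.outer(t1, [1.0, 0.0, 0.0]),
               np.outer(t2, [0.0, 1.0, 0.0])])
X += 0.01 * rng.standard_normal(X.shape)
labels = sparse_subspace_clustering(X, n_clusters=2)
```

Because each point's sparse representation draws almost entirely on points from its own line, the affinity matrix is close to block-diagonal and spectral clustering separates the two groups.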