Publication | Open Access
Sure Independence Screening for Ultrahigh Dimensional Feature Space
2.7K citations · 110 references · Published 2008
Topics: Engineering, Machine Learning, Sure Independence Screening, Feature Selection, Variable Selection, Image Analysis, Data Science, Pattern Recognition, Sure Screening Method, Statistics, Supervised Learning, Summary Variable Selection, Machine Vision, Knowledge Discovery, Computer Science, Dimensionality Reduction, Medical Image Computing, Statistical Learning Theory, High-dimensional Method, Higher Dimensional Problem, Statistical Inference
Variable selection is crucial in high‑dimensional statistical modeling, but when dimensionality grows ultrahigh, methods like the Dantzig selector face challenges: the logarithmic factor log(p) in its risk bound can become large and the uniform uncertainty principle can fail. The authors develop a sure independence screening method based on correlation learning to reduce ultrahigh dimensionality to a moderate scale below the sample size. Correlation learning, shown to possess the sure screening property even for exponentially growing dimensionality, forms the basis of the proposed method, with an iterative variant introduced to improve finite‑sample performance. By accurately reducing dimensionality below the sample size, the method enhances both the speed and the accuracy of variable selection when combined with techniques such as SCAD, the Dantzig selector, lasso, or adaptive lasso, and it clarifies the interrelationships between these methods.
Summary: Variable selection plays an important role in high dimensional statistical modelling which nowadays appears in many areas and is key to various scientific discoveries. For problems of large scale or dimensionality p, accuracy of estimation and computational cost are two top concerns. Recently, Candes and Tao have proposed the Dantzig selector using L1-regularization and showed that it achieves the ideal risk up to a logarithmic factor log(p). Their innovative procedure and remarkable result are challenged when the dimensionality is ultrahigh as the factor log(p) can be large and their uniform uncertainty principle can fail. Motivated by these concerns, we introduce the concept of sure screening and propose a sure screening method that is based on correlation learning, called sure independence screening, to reduce dimensionality from high to a moderate scale that is below the sample size. In a fairly general asymptotic framework, correlation learning is shown to have the sure screening property for even exponentially growing dimensionality. As a methodological extension, iterative sure independence screening is also proposed to enhance its finite sample performance. With dimension reduced accurately from high to below sample size, variable selection can be improved on both speed and accuracy, and can then be accomplished by a well-developed method such as smoothly clipped absolute deviation, the Dantzig selector, lasso or adaptive lasso. The connections between these penalized least squares methods are also elucidated.
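The screening step the abstract describes is simple to sketch: rank each predictor by its absolute marginal correlation with the response and retain only the top d features, with d below the sample size, before applying a refined method such as SCAD or the lasso. Below is a minimal, hedged illustration of that idea (not the authors' implementation); the function name `sis` and the choice of d are placeholders.

```python
import numpy as np

def sis(X, y, d):
    """Sure-independence-screening sketch: keep the d predictors with the
    largest absolute marginal correlation with the response.

    X : (n, p) design matrix, y : (n,) response, d : number of features
    to retain (intended to be below the sample size n).
    """
    # Standardize columns of X and the response so that the inner
    # product below equals the sample correlation.
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    ys = (y - y.mean()) / y.std()
    omega = np.abs(Xs.T @ ys) / len(y)       # componentwise |correlations|
    return np.argsort(omega)[::-1][:d]       # indices of the top-d features

# Toy usage: p = 1000 predictors, n = 100 samples, only features 0 and 1
# carry signal; screening should retain them among the top d = 20.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 1000))
y = 3 * X[:, 0] - 2 * X[:, 1] + 0.5 * rng.standard_normal(100)
selected = sis(X, y, d=20)
```

A penalized least-squares method (SCAD, Dantzig selector, lasso, adaptive lasso) would then be run on `X[:, selected]` only, which is the speed and accuracy gain the abstract refers to.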
| Year | Citations |
|---|---|
| 1995 | 105.5K |
| 1996 | 50.3K |
| 2001 | 27.3K |
| 1999 | 26.9K |
| 2006 | 20.5K |
| 1997 | 19.8K |
| 1999 | 11.6K |
| 2003 | 9.9K |
| 2004 | 9.4K |
| 2001 | 9K |