Concepedia

Publication | Closed Access

Semi-supervised Discriminant Analysis

Citations: 760

References: 25

Year: 2007

TLDR

Linear Discriminant Analysis (LDA) is widely used to extract features that preserve class separability by maximizing between‑class covariance while minimizing within‑class covariance, but limited training samples can lead to inaccurate class covariance estimates. This study introduces Semi‑supervised Discriminant Analysis (SDA), aiming to learn a smooth discriminant function on the data manifold by leveraging both labeled and unlabeled samples. SDA maximizes class separability using labeled data while employing unlabeled data to capture the intrinsic geometric structure of the data. Experiments on single‑training‑image face recognition and relevance‑feedback image retrieval confirm SDA’s effectiveness.

Abstract

Linear Discriminant Analysis (LDA) has been a popular method for extracting features which preserve class separability. The projection vectors are commonly obtained by maximizing the between-class covariance while simultaneously minimizing the within-class covariance. In practice, when there are not sufficient training samples, the covariance matrix of each class may not be accurately estimated. In this paper, we propose a novel method, called Semi-supervised Discriminant Analysis (SDA), which makes use of both labeled and unlabeled samples. The labeled data points are used to maximize the separability between different classes, and the unlabeled data points are used to estimate the intrinsic geometric structure of the data. Specifically, we aim to learn a discriminant function which is as smooth as possible on the data manifold. Experimental results on single-training-image face recognition and relevance-feedback image retrieval demonstrate the effectiveness of our algorithm.
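The idea in the abstract — labeled points drive class separability while all points supply a graph-based smoothness regularizer — can be sketched as a regularized generalized eigenproblem. The following minimal NumPy sketch is illustrative, not the authors' implementation: the function name, parameter defaults, and the simple symmetric 0/1 k-NN weights are assumptions.

```python
import numpy as np

def sda_fit(X_lab, y, X_unlab, alpha=0.1, k=5, dim=1):
    """Sketch of Semi-supervised Discriminant Analysis (SDA).

    Labeled samples define between-class (S_b) and total (S_t) scatter;
    a k-NN graph Laplacian L over ALL samples regularizes the projection
    toward smoothness on the data manifold. Directions a solve
    S_b a = lam * (S_t + alpha * X^T L X) a.
    (Details like the 0/1 edge weights are illustrative assumptions.)
    """
    X_all = np.vstack([X_lab, X_unlab])           # (n, d) all samples
    mu = X_lab.mean(axis=0)
    d = X_lab.shape[1]

    # Between-class and total scatter from the labeled points only.
    S_b = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X_lab[y == c]
        diff = (Xc.mean(axis=0) - mu)[:, None]
        S_b += len(Xc) * diff @ diff.T
    X0 = X_lab - mu
    S_t = X0.T @ X0

    # k-NN graph Laplacian over labeled + unlabeled samples.
    n = X_all.shape[0]
    D2 = ((X_all[:, None, :] - X_all[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(D2[i])[1:k + 1]         # skip self at index 0
        W[i, nbrs] = 1.0
    W = np.maximum(W, W.T)                        # symmetrize
    L = np.diag(W.sum(axis=1)) - W                # unnormalized Laplacian

    # Regularized generalized eigenproblem; small ridge for stability.
    reg = S_t + alpha * (X_all.T @ L @ X_all) + 1e-8 * np.eye(d)
    evals, evecs = np.linalg.eig(np.linalg.solve(reg, S_b))
    order = np.argsort(-evals.real)
    return evecs.real[:, order[:dim]]             # (d, dim) projection
```

With only a few labeled points per class (the single-training-image regime the paper targets), S_t is poorly estimated, and the `alpha * X^T L X` term supplies the missing geometric structure from the unlabeled pool.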
