Publication | Closed Access
Learning similarity measures in non-orthogonal space
39 Citations
14 References
Year: 2004
Venue: Unknown
Topics: Engineering, Machine Learning, Similarity Measure, Corpus Linguistics, Text Mining, Natural Language Processing, Information Retrieval, Data Science, Data Mining, Pattern Recognition, Document Clustering, Cosine Similarity, Knowledge Discovery, Computer Science, Image Similarity, Vector Space Model, Similarity Metrics, Similarity Search, Semantic Similarity
Many machine learning and data mining algorithms rely crucially on similarity metrics. Cosine similarity, which computes the inner product of two normalized feature vectors, is one of the most commonly used similarity measures. However, in many practical tasks such as text categorization and document clustering, cosine similarity is computed under the assumption that the input space is orthogonal, an assumption that usually cannot be satisfied because of synonymy and polysemy. Algorithms such as Latent Semantic Indexing (LSI) address this problem by projecting the original data into an orthogonal space, but LSI suffers from high computational cost and data sparseness, which increase computation time and storage requirements on large-scale realistic data. In this paper, we propose a novel and effective similarity metric for a non-orthogonal input space. The basic idea of the proposed metric is that the similarity of features should affect the similarity of objects, and vice versa. We then propose a novel iterative algorithm for computing similarity measures in the non-orthogonal space. Experimental results on a synthetic data set, real MSN Search click-through logs, and the 20 Newsgroups (20NG) data set show that our algorithm outperforms traditional cosine similarity and is superior to LSI.
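The abstract's core idea, that object similarity and feature similarity should reinforce each other iteratively, can be sketched roughly as below. The matrix form, the normalization step, and the alternating update rule here are our own assumptions for illustration, not the paper's exact algorithm: starting from an identity feature-similarity matrix (i.e., plain cosine over an assumed-orthogonal space), each pass recomputes object similarity through the current feature similarity and vice versa.

```python
import numpy as np

def iterative_similarity(X, iters=10):
    """Co-compute object and feature similarity matrices.

    X: (n_objects, n_features) term-weight matrix.
    Hypothetical sketch: alternates between
      So = normalize(X @ Sf @ X.T)   # objects similar if their features are
      Sf = normalize(X.T @ So @ X)   # features similar if their objects are
    """
    def normalize(S):
        # Rescale so every self-similarity is 1 (cosine-style).
        d = np.sqrt(np.diag(S))
        d[d == 0] = 1.0
        return S / np.outer(d, d)

    Sf = np.eye(X.shape[1])  # start from an orthogonal feature space
    So = normalize(X @ Sf @ X.T)
    for _ in range(iters):
        So = normalize(X @ Sf @ X.T)
        Sf = normalize(X.T @ So @ X)
    return So, Sf

# Toy corpus: docs 0 and 1 share only one term directly, but the
# iteration lets correlated terms raise their similarity.
X = np.array([[1., 1., 0.],
              [0., 1., 1.],
              [1., 0., 0.]])
So, Sf = iterative_similarity(X)
```

With the identity initialization, the first object-similarity matrix is exactly the cosine similarity; subsequent passes fold in the learned feature correlations, which is the "vice versa" coupling the abstract describes.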