Publication | Closed Access
*SEM 2013 shared task: Semantic Textual Similarity
Citations: 385 | References: 8 | Year: 2013
Keywords: Engineering, Semantic Web, Semantics, Semantic Similarity, Corpus Linguistics, Journalism, Text Mining, Natural Language Processing, Applied Linguistics, Information Retrieval, Computational Linguistics, Language Studies, Machine Translation, Entity Disambiguation, NLP Task, Semantic Textual Similarity, Distributional Semantics, Semantic Equivalence, Core Task, Linguistics, Word-sense Disambiguation, Semantic Representation
In Semantic Textual Similarity (STS), systems rate the degree of semantic equivalence between two text snippets on a graded scale from 0 to 5, with 5 being the most similar. This year we set up two tasks: (i) a core task (CORE), and (ii) a typed-similarity task (TYPED). CORE is similar in setup to the SemEval STS 2012 task, with pairs of sentences drawn from sources related to those of 2012 yet different in genre: this year we included newswire headlines, machine translation evaluation datasets, and glosses from multiple lexical resources. TYPED, on the other hand, is novel and tries to characterize why two items are deemed similar, using cultural heritage items described with metadata such as title, author, or description. Several types of similarity have been defined, including similar author, similar time period, and similar location. The annotation for both tasks leverages crowdsourcing, with relatively high inter-annotator correlation, ranging from 62% to 87%. The CORE task attracted 34 participants with 89 runs, and the TYPED task attracted 6 teams with 14 runs.