Publication | Open Access
BERTScore: Evaluating Text Generation with BERT
603
Citations
64
References
2020
Year
Natural Language Processing · Retrieval Augmented Generation · Common Metrics · Token Similarity · Text Generation · Machine Learning · Engineering · Language Generation · Computational Linguistics · NLP Task · Language Studies · Linguistics · Text Mining · Machine Translation · Word Embeddings
We propose BERTScore, an automatic evaluation metric for text generation. Analogously to common metrics, BERTScore computes a similarity score for each token in the candidate sentence with each token in the reference sentence. However, instead of exact matches, we compute token similarity using contextual embeddings. We evaluate using the outputs of 363 machine translation and image captioning systems. BERTScore correlates better with human judgments and provides stronger model selection performance than existing metrics. Finally, we use an adversarial paraphrase detection task and show that BERTScore is more robust to challenging examples compared to existing metrics.
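The greedy token-matching described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's full method: it assumes precomputed token embeddings (in practice these come from BERT's contextual layers) and omits the paper's optional idf weighting and baseline rescaling. The function name `bertscore_f1` is a placeholder for this sketch.

```python
import numpy as np

def bertscore_f1(cand_emb: np.ndarray, ref_emb: np.ndarray) -> float:
    """Greedy-matching F1 over token embeddings (shape: num_tokens x dim).

    Sketch of the BERTScore idea: each token is matched to its most
    similar token on the other side by cosine similarity.
    """
    # Normalize rows so that dot products are cosine similarities.
    c = cand_emb / np.linalg.norm(cand_emb, axis=1, keepdims=True)
    r = ref_emb / np.linalg.norm(ref_emb, axis=1, keepdims=True)
    sim = c @ r.T  # pairwise cosine-similarity matrix

    # Precision: each candidate token matched to its best reference token.
    precision = sim.max(axis=1).mean()
    # Recall: each reference token matched to its best candidate token.
    recall = sim.max(axis=0).mean()
    return 2 * precision * recall / (precision + recall)

# Identical embeddings yield a perfect score.
emb = np.array([[1.0, 0.0], [0.0, 1.0]])
print(round(bertscore_f1(emb, emb), 4))  # → 1.0
```

Because matching is done in embedding space rather than over surface forms, paraphrases with no exact token overlap can still score highly, which is what drives the improved correlation with human judgments reported above.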