Publication | Open Access
Natural Questions: A Benchmark for Question Answering Research
Citations: 1.9K · References: 25 · Year: 2019
Keywords: Natural Language Processing, Engineering, Information Retrieval, Question Answering, Data Science, Automated Reasoning, Aggregated Queries, Computational Linguistics, NLP Task, Natural Language Interface, Natural Questions Corpus, Robust Metrics, Semantic Web, Language Studies, Linguistics, Natural Questions, Text Mining, Machine Translation
Questions in the corpus are real anonymized queries issued to the Google search engine. The authors introduce the Natural Questions corpus, validate its quality through experiments, analyze human annotation variability, and establish robust evaluation metrics with baseline results. Each annotator receives a question together with a Wikipedia page drawn from the top 5 search results, and marks a long answer and a short answer, or null if no answer is present on the page. The released dataset contains 307,373 training, 7,830 development, and 7,842 test examples. Analysis of 25-way annotations reveals significant human variability, and the authors demonstrate high human upper bounds on robust evaluation metrics while establishing competitive baseline results.
We present the Natural Questions corpus, a question answering data set. Questions consist of real anonymized, aggregated queries issued to the Google search engine. An annotator is presented with a question along with a Wikipedia page from the top 5 search results, and annotates a long answer (typically a paragraph) and a short answer (one or more entities) if present on the page, or marks null if no long/short answer is present. The public release consists of 307,373 training examples with single annotations; 7,830 examples with 5-way annotations for development data; and a further 7,842 examples with 5-way annotations, sequestered as test data. We present experiments validating the quality of the data. We also describe analysis of 25-way annotations on 302 examples, giving insights into human variability on the annotation task. We introduce robust metrics for the purposes of evaluating question answering systems; demonstrate high human upper bounds on these metrics; and establish baseline results using competitive methods drawn from related literature.
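The abstract describes each example as a question paired with a Wikipedia page and annotator-marked long/short answer spans (or null), with 5-way annotations on the development and test splits. The sketch below is a minimal, illustrative Python representation of that structure under those assumptions; the class and field names (`NQExample`, `long_answer`, `short_answers`, the `threshold` of 2) are hypothetical and do not reflect the official release schema or evaluation script.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative (assumed) representation of a Natural Questions example:
# a question, a Wikipedia page, and per-annotator long/short answer spans.
# Names are hypothetical, not the official data format.

@dataclass
class Span:
    start_token: int  # inclusive token offset into the Wikipedia page
    end_token: int    # exclusive token offset


@dataclass
class Annotation:
    long_answer: Optional[Span] = None                         # typically a paragraph, or None (null)
    short_answers: List[Span] = field(default_factory=list)    # one or more entities, possibly empty


@dataclass
class NQExample:
    question_text: str
    document_title: str
    annotations: List[Annotation]  # 1 annotation for training, 5 for dev/test


def has_long_answer(example: NQExample, threshold: int = 2) -> bool:
    """Treat an example as having a long answer when at least `threshold`
    annotators marked one; the threshold value here is an assumption."""
    marked = sum(a.long_answer is not None for a in example.annotations)
    return marked >= threshold


# Toy 5-way annotated example: 3 annotators mark a long answer, 2 mark null.
ex = NQExample(
    question_text="what is the natural questions corpus",
    document_title="Natural Questions",
    annotations=[Annotation(long_answer=Span(10, 80))] * 3 + [Annotation()] * 2,
)
print(has_long_answer(ex))  # True
```

This kind of majority-style aggregation over multiple annotations is one simple way to handle the human variability the paper analyzes; the paper's own evaluation metrics should be consulted for the definitive procedure.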