Publication | Open Access
Best practices for the human evaluation of automatically generated text
Citations: 191
References: 72
Year: 2019
Venue: unknown
Currently, there is little agreement as to how Natural Language Generation (NLG) systems should be evaluated, with a particularly high degree of variation in the way that human evaluation is carried out. This paper provides an overview of how human evaluation is currently conducted, and presents a set of best practices, grounded in the literature. With this paper, we hope to contribute to the quality and consistency of human evaluations in NLG.