Concepedia

TLDR

Building intelligent conversational agents has led to end-to-end models trained on large real-world dialog corpora, yet their precise successes and limitations remain hard to assess, while the synthetic bAbI tasks, though useful for probing reasoning, are too small to test whether methods scale. The authors propose a larger-scale suite of tasks to bridge real-world and synthetic dialog evaluation. They construct a movie-domain dataset covering 75k entities with 3.5M training examples, comprising tasks that require factual question answering (using OMDB), personalization (using MovieLens), short conversations combining the two, and natural dialogs drawn from Reddit. They present and evaluate the performance of several models on these tasks.

Abstract

A long-term goal of machine learning is to build intelligent conversational agents. One recent popular approach is to train end-to-end models on a large amount of real dialog transcripts between humans (Sordoni et al., 2015; Vinyals & Le, 2015; Shang et al., 2015). However, this approach leaves many questions unanswered, as the precise successes and shortcomings of each model are hard to assess. A contrasting recent proposal is the bAbI tasks (Weston et al., 2015b), a set of synthetic datasets that measure the ability of learning machines at various reasoning tasks over toy language. Unfortunately, those tests are very small and hence may encourage methods that do not scale. In this work, we propose a suite of new tasks of a much larger scale that attempt to bridge the gap between the two regimes. Choosing the domain of movies, we provide tasks that test the ability of models to answer factual questions (utilizing OMDB), provide personalization (utilizing MovieLens), carry short conversations about the two, and finally to perform on natural dialogs from Reddit. We provide a dataset covering 75k movie entities with 3.5M training examples. We present results of various models on these tasks and evaluate their performance.
