Publication | Open Access
Automatic Summarization of Open-Domain Multiparty Dialogues in Diverse Genres
128 Citations · 50 References · Published 2002
Topics: Engineering, Entity Summarization, Spoken Dialog System, Communication, Corpus Linguistics, Automatic Summarization, Text Mining, Speech Recognition, Natural Language Processing, Summarization System, Computational Linguistics, Conversation Analysis, Machine Translation, NLP Task, Automatic-extract Summaries, Multi-modal Summarization, Speech Summarization, Arts, Linguistics
Automatic summarization of open-domain spoken dialogues is a relatively new research area. This article introduces the task and the challenges involved and motivates and presents an approach for obtaining automatic-extract summaries for human transcripts of multiparty dialogues of four different genres, without any restriction on domain. We address the following issues, which are intrinsic to spoken-dialogue summarization and typically can be ignored when summarizing written text such as news wire data: (1) detection and removal of speech disfluencies; (2) detection and insertion of sentence boundaries; and (3) detection and linking of cross-speaker information units (question-answer pairs). A system evaluation is performed using a corpus of 23 dialogue excerpts with an average duration of about 10 minutes, comprising 80 topical segments and about 47,000 words total. The corpus was manually annotated for relevant text spans by six human annotators. The global evaluation shows that for the two more informal genres, our summarization system using dialogue-specific components significantly outperforms two baselines: (1) a maximum-marginal-relevance ranking algorithm using TF*IDF term weighting, and (2) a LEAD baseline that extracts the first n words from a text.
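The first baseline the abstract describes ranks candidate sentences by maximal marginal relevance (MMR) with TF*IDF term weighting: each pick maximizes relevance to the query while penalizing similarity to sentences already selected. The sketch below illustrates that general technique; the function names, the λ trade-off value, and the exact TF*IDF formula are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of an MMR ranking baseline with TF*IDF weighting.
# All names and parameter values here are illustrative assumptions.
import math
from collections import Counter

def tfidf_vectors(texts):
    """Build a sparse TF*IDF vector (term -> weight) for each text."""
    docs = [t.lower().split() for t in texts]
    df = Counter()
    for doc in docs:
        df.update(set(doc))          # document frequency per term
    n = len(docs)
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def mmr_rank(sentences, query, lam=0.7, k=3):
    """Greedily select k sentences, trading off relevance to the
    query against redundancy with sentences already selected."""
    vecs = tfidf_vectors(sentences + [query])
    qvec, vecs = vecs[-1], vecs[:-1]
    selected, candidates = [], list(range(len(sentences)))
    while candidates and len(selected) < k:
        def mmr(i):
            rel = cosine(vecs[i], qvec)
            red = max((cosine(vecs[i], vecs[j]) for j in selected),
                      default=0.0)
            return lam * rel - (1 - lam) * red
        best = max(candidates, key=mmr)
        selected.append(best)
        candidates.remove(best)
    return [sentences[i] for i in selected]
```

The LEAD baseline it is compared against is simpler still: concatenate the transcript's words in order and truncate after the first n.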