Concepedia

Publication | Open Access

A study of machine-learning-based approaches to extract clinical entities and their assertions from discharge summaries

Citations: 296
References: 44
Year: 2011

TLDR

The study was part of the 2010 Center of Informatics for Integrating Biology and the Bedside/Veterans Affairs natural‑language‑processing challenge. The authors aimed to develop and evaluate machine‑learning methods for extracting clinical entities and their assertions from hospital discharge summaries. They built a machine‑learning named‑entity recognition system, tested various features and algorithms on 349 annotated notes, then applied a hybrid rule‑based/ML system to concept extraction and assertion classification on 477 test notes, measuring precision, recall, and F‑measure. Conditional Random Fields outperformed Support Vector Machines, and the hybrid system achieved an overall F‑score of 0.8391 for concept extraction (second place) and 0.9313 for assertion classification (fourth place but not statistically different from the top three).

Abstract

Objective The authors' goal was to develop and evaluate machine-learning-based approaches to extracting clinical entities (medical problems, tests, and treatments) and their asserted status from hospital discharge summaries written in natural language. This project was part of the 2010 Center of Informatics for Integrating Biology and the Bedside/Veterans Affairs (VA) natural-language-processing challenge.

Design The authors implemented a machine-learning-based named-entity recognition system for clinical text and systematically evaluated the contributions of different types of features and ML algorithms, using a training corpus of 349 annotated notes. Based on the results on the training data, the authors developed a novel hybrid clinical entity extraction system that integrated heuristic rule-based modules with the ML-based named-entity recognition module. They applied the hybrid system to the concept extraction and assertion classification tasks in the challenge and evaluated its performance on a test set of 477 annotated notes.

Measurements Standard measures, including precision, recall, and F-measure, were calculated using the evaluation script provided by the challenge organizers. The overall performance across all three types of clinical entities and all six types of assertions over the 477 annotated notes was the primary metric in the challenge.

Results and discussion Systematic evaluation on the training set showed that Conditional Random Fields outperformed Support Vector Machines, and that semantic information from existing natural-language-processing systems substantially improved performance, although the contributions of different feature types varied. The hybrid entity extraction system achieved a maximum overall F-score of 0.8391 for concept extraction (ranked second) and 0.9313 for assertion classification (ranked fourth, but not statistically different from the top three systems) on the challenge test set.
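As an illustration of the standard measures named in the abstract (precision, recall, and F-measure over extracted entities), a minimal exact-match scorer on (start, end, type) entity spans could look like the sketch below. The function name and sample data are hypothetical; this is not the challenge organizers' official evaluation script.

```python
# Illustrative sketch (not the i2b2/VA evaluation script): exact-span
# scoring of predicted entities against gold annotations.
# Entities are (start, end, type) tuples; all names here are hypothetical.

def prf(gold, predicted):
    """Return (precision, recall, F-measure) for exact-match entity spans."""
    gold_set, pred_set = set(gold), set(predicted)
    tp = len(gold_set & pred_set)  # true positives: exact span + type matches
    precision = tp / len(pred_set) if pred_set else 0.0
    recall = tp / len(gold_set) if gold_set else 0.0
    if precision + recall == 0:
        return precision, recall, 0.0
    f = 2 * precision * recall / (precision + recall)
    return precision, recall, f

gold = [(0, 3, "problem"), (10, 14, "test"), (20, 25, "treatment")]
pred = [(0, 3, "problem"), (10, 14, "test"), (30, 33, "problem")]
p, r, f = prf(gold, pred)
# Two of three predictions match gold exactly: p = r = f = 2/3
```

A real evaluation would typically report both exact and inexact (overlapping-span) matches, but the exact-match case above is enough to show how the F-score balances precision and recall.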

