Publication | Closed Access
Maximum Entropy Markov Models for Information Extraction and Segmentation
Citations: 1.3K · References: 18 · Year: 2000 · Venue: Unknown
Hidden Markov models (HMMs) are a powerful probabilistic tool for modeling sequential data, and have been applied with success to many text-related tasks, such as part-of-speech tagging, text segmentation and information extraction. In these cases, the observations are usually modeled as multinomial distributions over a discrete vocabulary, and the HMM parameters are set to maximize the likelihood of the observations. This paper presents a new Markovian sequence model, closely related to HMMs, that allows observations to be represented as arbitrary overlapping features (such as word, capitalization, formatting, part-of-speech), and defines the conditional probability of state sequences given observation sequences. It does this by using the maximum entropy framework to fit a set of exponential models that represent the probability of a state given an observation and the previous state. We present positive experimental results on the segmentation of FAQ’s.
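The abstract describes a model that replaces the HMM's generative observation distribution with an exponential model of the next state given the previous state and overlapping observation features. A minimal sketch of that per-state distribution, with illustrative hand-set weights and feature functions (the paper learns the weights, e.g. via generalized iterative scaling; all names and values here are assumptions for illustration):

```python
import math

# Candidate states for a FAQ-segmentation-style task (illustrative labels).
STATES = ["head", "question", "answer", "tail"]

def features(prev_state, obs, state):
    """Binary, overlapping features of (previous state, observation, candidate state).

    These are hypothetical examples; the paper's features include words,
    capitalization, formatting, and part-of-speech cues.
    """
    return [
        1.0 if obs.get("begins_with_number") and state == "question" else 0.0,
        1.0 if obs.get("blank_line") and state == prev_state else 0.0,
        1.0 if prev_state == "question" and state == "answer" else 0.0,
    ]

# Illustrative lambda weights, not learned values.
WEIGHTS = [2.0, 1.5, 1.0]

def next_state_dist(prev_state, obs):
    """P(s | s', o): a normalized exponential (maximum-entropy) model over states."""
    scores = {
        s: math.exp(sum(w * f for w, f in zip(WEIGHTS, features(prev_state, obs, s))))
        for s in STATES
    }
    z = sum(scores.values())  # per-(s', o) normalizer
    return {s: v / z for s, v in scores.items()}

dist = next_state_dist("question", {"begins_with_number": False, "blank_line": False})
```

Because the model is conditional, normalization is local to each (previous state, observation) pair, so arbitrary overlapping features can be used without modeling dependencies among them.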