Publication | Open Access
Training connectionist models for the structured language model
Citations: 22
References: 12
Year: 2003
Venue: Unknown
Topics: Structured Prediction, Syntactic Parsing, Engineering, Multilingual Pretraining, Large Language Model, Corpus Linguistics, Text Mining, Natural Language Processing, Syntax, Data Science, Computational Linguistics, Grammar, Language Studies, Language Models, Machine Translation, Connectionist Models, Semantic Parsing, Treebanks, Structured Language Model, Linguistics
We investigate the performance of the Structured Language Model (SLM), in terms of perplexity (PPL), when its components are modeled by connectionist models. The connectionist models use a distributed representation of the items in the history and make much better use of context than the commonly used interpolated or back-off models, not only because of the connectionist model's inherent ability to combat data sparseness, but also because model size grows sublinearly as the context length increases. The connectionist models can be further trained by an EM procedure, similar to the one previously used for training the SLM. Our experiments show that, after interpolation with a baseline trigram language model, the connectionist models significantly improve PPL over the interpolated and back-off models on the UPenn Treebank corpus. The EM training procedure improves the connectionist models further, by using hidden events obtained by the SLM parser.
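The abstract mentions interpolating the connectionist model with a baseline trigram model and reporting perplexity. As a rough sketch of what that evaluation looks like (the probability values below are made up for illustration; the paper's actual models and interpolation weights are not specified here):

```python
import math

# Hypothetical per-word test-set probabilities from two language models.
# In the paper's setup, one stream would come from the connectionist SLM
# components and the other from the baseline trigram model.
p_conn = [0.12, 0.08, 0.30, 0.05]
p_tri = [0.10, 0.15, 0.20, 0.06]

def interpolated_perplexity(p_a, p_b, lam):
    """Perplexity of the linear interpolation lam*p_a + (1-lam)*p_b."""
    log_sum = sum(math.log2(lam * a + (1 - lam) * b)
                  for a, b in zip(p_a, p_b))
    return 2.0 ** (-log_sum / len(p_a))

# A lower perplexity after interpolation indicates the two models
# capture complementary information about the context.
ppl = interpolated_perplexity(p_conn, p_tri, lam=0.5)
```

Linear interpolation is the standard way such combinations are evaluated; the optimal weight `lam` is usually tuned on held-out data.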