Publication | Open Access
Better word alignments with supervised ITG models
Citations: 101
References: 30
Year: 2009
Venue: Unknown
Topics: Structured Prediction, Syntactic Parsing, Engineering, Machine Learning, Multilingual Pretraining, Simple Relaxations, Corpus Linguistics, Text Mining, Natural Language Processing, Syntax, Computational Linguistics, Grammar, Language Studies, Machine Translation, GIZA++ Alignments, NLP Task, Semantic Parsing, Neural Machine Translation, Inversion Transduction Grammar, Supervised ITG Models, Linguistics
This work investigates supervised word alignment methods that exploit inversion transduction grammar (ITG) constraints. We consider maximum margin and conditional likelihood objectives, including the presentation of a new normal form grammar for canonicalizing derivations. Even for non-ITG sentence pairs, we show that it is possible to learn ITG alignment models by simple relaxations of structured discriminative learning objectives. For efficiency, we describe a set of pruning techniques that together allow us to align sentences two orders of magnitude faster than naive bitext CKY parsing. Finally, we introduce many-to-one block alignment features, which significantly improve our ITG models. Altogether, our method results in the best reported AER numbers for Chinese-English and an improvement of 1.1 BLEU points over GIZA++ alignments.
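To make the bitext CKY parsing mentioned in the abstract concrete, here is a minimal, illustrative sketch of an ITG alignment dynamic program. It is not the paper's implementation (which adds normal-form grammars, discriminative training, block features, and pruning): it simply finds the best-scoring one-to-one ITG alignment for a toy word-pair score matrix `s`, where `s[i][k]` is an assumed per-pair score and unaligned words score zero. The DP table is indexed by bispans, source span `[i, j)` paired with target span `[k, l)`, and each bispan is built from two smaller bispans by either a straight or an inverted rule, which is exactly the ITG constraint.

```python
def itg_align_score(s):
    """Best one-to-one ITG alignment score over a toy score matrix.

    s[i][k] is the (assumed) score for aligning source word i to target
    word k; leaving a word unaligned contributes 0. This is a naive
    O(n^3 * m^3) bitext CKY recognizer, the baseline the paper's pruning
    techniques speed up by two orders of magnitude.
    """
    n, m = len(s), len(s[0]) if s else 0
    best = {}  # (i, j, k, l) -> best score for bispan [i,j) x [k,l)

    # Enumerate all non-empty bispans in order of increasing total width,
    # so both children of any split are already computed.
    spans = [(i, j, k, l)
             for i in range(n + 1) for j in range(i, n + 1)
             for k in range(m + 1) for l in range(k, m + 1)
             if (j - i) + (l - k) >= 1]
    spans.sort(key=lambda t: (t[1] - t[0]) + (t[3] - t[2]))

    for (i, j, k, l) in spans:
        sc = float("-inf")
        # Terminals: a single aligned word pair, or a single unaligned word.
        if j - i == 1 and l - k == 1:
            sc = max(sc, s[i][k])      # align source i to target k
        if j - i == 1 and l == k:
            sc = max(sc, 0.0)          # source word left unaligned
        if j == i and l - k == 1:
            sc = max(sc, 0.0)          # target word left unaligned
        # Binary rules: split both spans; children keep or swap target order.
        for x in range(i, j + 1):
            for y in range(k, l + 1):
                # Straight rule: [i,x)x[k,y) + [x,j)x[y,l)
                a, b = best.get((i, x, k, y)), best.get((x, j, y, l))
                if a is not None and b is not None:
                    sc = max(sc, a + b)
                # Inverted rule: [i,x)x[y,l) + [x,j)x[k,y)
                c, d = best.get((i, x, y, l)), best.get((x, j, k, y))
                if c is not None and d is not None:
                    sc = max(sc, c + d)
        best[(i, j, k, l)] = sc

    return best.get((0, n, 0, m), 0.0)
```

The inverted rule is what lets the model cover crossing alignments: for `s = [[-1, 4], [4, -1]]` the monotone (straight-only) reading scores poorly, while the inverted split aligns the two words in swapped order for a total of 8.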