Publication | Open Access
Transformer Feed-Forward Layers Build Predictions by Promoting Concepts in the Vocabulary Space
Citations: 82 | References: 23 | Year: 2022 | Venue: Unknown
Topics: Structured Prediction, LLM Fine-tuning, Engineering, Machine Learning, Large Language Model, Corpus Linguistics, Text Mining, Natural Language Processing, Computational Linguistics, Language Engineering, Vocabulary Space, Transformer-based Language Models, Language Studies, Language Models, Machine Translation, Large AI Model, FFN Layer, NLP Task, Computer Science, Deep Learning, FFN Updates, Linguistics
Transformer-based language models (LMs) are at the core of modern NLP, but their internal prediction construction process is opaque and largely not understood. In this work, we make a substantial step towards unveiling this underlying prediction process by reverse-engineering the operation of the feed-forward network (FFN) layers, one of the building blocks of transformer models. We view the token representation as a changing distribution over the vocabulary, and the output from each FFN layer as an additive update to that distribution. We then analyze the FFN updates in the vocabulary space, showing that each update can be decomposed into sub-updates corresponding to single FFN parameter vectors, each promoting concepts that are often human-interpretable. We leverage these findings to control LM predictions, reducing the toxicity of GPT-2 by almost 50%, and to improve computational efficiency with a simple early-exit rule that saves 20% of computation on average.
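To make the decomposition described in the abstract concrete, here is a minimal PyTorch sketch (not the paper's code): the FFN output is written as a weighted sum of its second-layer parameter vectors, each sub-update is projected through the output embedding matrix, and the tokens it promotes can then be read off. The names (`W_in`, `W_out`, `E`), the ReLU nonlinearity, and the GPT-2-small-like dimensions are illustrative assumptions.

```python
# Minimal sketch (assumed names and shapes, not the authors' implementation):
# view a transformer FFN layer's output as a sum of sub-updates, and read each
# sub-update as a set of promoted tokens by projecting it into the vocabulary space.
import torch

d_model, d_ff, vocab_size = 768, 3072, 50257        # GPT-2-small-like sizes (assumed)

W_in = torch.randn(d_ff, d_model) / d_model ** 0.5  # first FFN projection; rows act as "key" vectors k_i
W_out = torch.randn(d_ff, d_model) / d_ff ** 0.5    # second FFN projection; rows act as "value" vectors v_i
E = torch.randn(vocab_size, d_model)                # output embedding matrix applied before the softmax

x = torch.randn(d_model)                            # hidden state of one token entering the FFN

# Standard FFN output (biases omitted): FFN(x) = sum_i f(x . k_i) * v_i
coeffs = torch.relu(W_in @ x)                       # activation coefficient m_i for each value vector
ffn_update = W_out.T @ coeffs                       # the additive update the FFN contributes to the residual stream

# Decompose the update into per-vector sub-updates m_i * v_i and project each one
# through E: row i then holds the token logits that sub-update i promotes.
sub_updates = coeffs.unsqueeze(1) * W_out           # (d_ff, d_model)
sub_update_logits = sub_updates @ E.T               # (d_ff, vocab_size)

# The full update's effect on the vocabulary distribution is the sum of the sub-updates' effects.
full_logits = ffn_update @ E.T
assert torch.allclose(sub_update_logits.sum(dim=0), full_logits, rtol=1e-3, atol=1e-2)

# Inspect the tokens promoted by the most active sub-update; in the paper, such
# promoted-token sets are often human-interpretable concepts.
i = int(coeffs.argmax())
top_token_ids = sub_update_logits[i].topk(10).indices
print(f"sub-update {i} promotes token ids: {top_token_ids.tolist()}")
```

The same vocabulary-space reading underlies the two applications mentioned above: suppressing sub-updates that promote unwanted (e.g., toxic) tokens steers predictions, and checking when the running distribution has stabilized motivates an early-exit rule.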
[Chart: citations by year]