Publication | Open Access
Finding structure in time
Citations: 2.9K · References: 0 · Year: 1990
Semantic Processing · Psycholinguistics · Memory Demands · Recurrent Neural Network · Social Sciences · Natural Language Processing · Temporal Dynamics · Connectionism · Memory · Language Studies · Human Learning · Cognitive Science · Semantic Interpretation · Knowledge Discovery · Temporal Pattern Recognition · Dynamic Memory · Interesting Internal Representations · Temporal Complexity · Structure Discovery · Linguistics
Time underlies many human behaviors, and one approach to modeling it is to represent time implicitly, through its effects on processing, rather than explicitly as a spatial dimension. The report addresses how to represent time in connectionist models by developing a proposal that uses recurrent links to give networks a dynamic memory. Hidden unit patterns are fed back as input to the network, so the internal representations that develop reflect task demands in the context of prior internal states; simulations ranging from a temporal XOR task to the discovery of syntactic/semantic features for words illustrate the approach. The networks learn internal representations in which memory is bound up with task processing, revealing rich, context-dependent structure that also generalizes across classes of items and suggests a way to represent lexical categories and the type/token distinction.
Time underlies many interesting human behaviors. Thus, the question of how to represent time in connectionist models is very important. One approach is to represent time implicitly, by its effects on processing, rather than explicitly (as in a spatial representation). The current report develops a proposal along these lines, first described by Jordan (1986), which involves the use of recurrent links in order to provide networks with a dynamic memory. In this approach, hidden unit patterns are fed back to themselves; the internal representations which develop thus reflect task demands in the context of prior internal states. A set of simulations is reported, ranging from relatively simple problems (a temporal version of XOR) to discovering syntactic/semantic features for words. The networks are able to learn interesting internal representations which incorporate task demands with memory demands: indeed, in this approach the notion of memory is inextricably bound up with task processing. These representations reveal a rich structure, which allows them to be highly context-dependent while also expressing generalizations across classes of items. These representations suggest a method for representing lexical categories and the type/token distinction.
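Since the abstract describes the architecture only briefly, the following is a minimal sketch, in Python/NumPy, of the kind of simple recurrent network it refers to: the hidden unit pattern from the previous time step is fed back as context input, and the network is trained to predict the next bit of a temporal XOR stream. The layer sizes, learning rate, weight initialization, and the one-step truncation of the gradient are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Generate a temporal XOR bit stream -------------------------------
# Two random bits are followed by their XOR, repeated. The network sees
# one bit at a time and must predict the next bit; only every third bit
# is predictable from the preceding two.
def temporal_xor_stream(n_triples):
    bits = []
    for _ in range(n_triples):
        a, b = rng.integers(0, 2, size=2)
        bits.extend([a, b, a ^ b])
    return np.array(bits, dtype=float)

# --- A minimal simple recurrent network (hidden-to-hidden feedback) ---
n_in, n_hid, n_out = 1, 4, 1              # illustrative sizes
W_xh = rng.normal(0, 0.5, (n_hid, n_in))  # input -> hidden
W_hh = rng.normal(0, 0.5, (n_hid, n_hid)) # context (previous hidden) -> hidden
W_hy = rng.normal(0, 0.5, (n_out, n_hid)) # hidden -> output
b_h = np.zeros(n_hid)
b_y = np.zeros(n_out)
lr = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

stream = temporal_xor_stream(3000)
h = np.zeros(n_hid)                        # context units: previous hidden state

for t in range(len(stream) - 1):
    x = stream[t:t + 1]                    # current bit
    target = stream[t + 1:t + 2]           # next bit, to be predicted
    h_prev = h                             # copy of the context units

    # Forward pass: hidden units see the input plus the prior hidden pattern.
    h = np.tanh(W_xh @ x + W_hh @ h_prev + b_h)
    y = sigmoid(W_hy @ h + b_y)

    # Backward pass, truncated to one step: the context is treated as a
    # fixed extra input, so no gradient flows into earlier time steps.
    dy = y - target                        # cross-entropy gradient at the output
    dh = (W_hy.T @ dy) * (1.0 - h ** 2)    # back through tanh hidden units

    W_hy -= lr * np.outer(dy, h)
    b_y -= lr * dy
    W_xh -= lr * np.outer(dh, x)
    W_hh -= lr * np.outer(dh, h_prev)
    b_h -= lr * dh
```

Tracking the prediction error separately on the third bit of each triple (the XOR bit) versus the two random bits should show the qualitative pattern the abstract describes: error falls only where the stream is predictable, which is only possible because the fed-back hidden pattern carries a memory of the preceding inputs.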