Publication | Closed Access
Integrating vision and audition within a cognitive architecture to track conversations
Citations: 45
References: 36
Year: 2008
Venue: Unknown
Keywords: Artificial Intelligence, Engineering, Computational Cognitive Architecture, Cognition, Cognitive Robotics, Intelligent Systems, Communication, Embodied Agent, Speech Recognition, Cognitive Architecture, Conversational Tracking System, Conversation Analysis, Robot Learning, Embodied Robotics, Health Sciences, Cognitive Science, Dialogue Management, Human Agent Interaction, Human-robot Interaction, Speech Communication, Speech Technology, Developmental Robotics, Eye Tracking, Speech Processing, Human-computer Interaction, Movement Modules, Speech Perception, Robotics, Speech Interface, Voice Interaction
We describe a computational cognitive architecture for robots, which we call ACT-R/E (ACT-R/Embodied). ACT-R/E is based on ACT-R [1, 2] but uses different visual, auditory, and movement modules. We describe a model that uses ACT-R/E to integrate visual and auditory information to perform conversation tracking in a dynamic environment. We also performed an empirical evaluation study, which shows that people see our conversational tracking system as extremely natural.
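The abstract does not give implementation details, but the core idea of integrating a visual cue (who the robot is looking at) with an auditory cue (where a voice is coming from) to track who holds the conversational floor can be illustrated with a minimal sketch. The code below is hypothetical: the `Person`, `track_speaker`, and bearing-based scoring are illustrative assumptions, not the authors' ACT-R/E model.

```python
# Hypothetical sketch (not the authors' code): fuse a visual cue
# (the person currently attended visually) with an auditory cue
# (localized sound-source bearing) to guess the current speaker.

from dataclasses import dataclass

@dataclass
class Person:
    name: str
    bearing_deg: float  # angle of the person relative to the robot

def track_speaker(people, sound_bearing_deg, facing_name):
    """Score each person by angular distance to the localized sound
    source, with a bonus for the visually attended person; the lowest
    score wins."""
    def score(p):
        angular_error = abs(p.bearing_deg - sound_bearing_deg)
        attention_bonus = 15.0 if p.name == facing_name else 0.0
        return angular_error - attention_bonus
    return min(people, key=score).name

people = [Person("Alice", -30.0), Person("Bob", 25.0)]
print(track_speaker(people, sound_bearing_deg=20.0, facing_name="Bob"))  # Bob
```

A real system would replace the fixed bonus and bearings with continuously updated perceptual estimates, but the design choice illustrated here, weighting auditory localization against visual attention, mirrors the multimodal integration the abstract describes.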