Publication | Closed Access
Predicting Visual Focus of Attention From Intention in Remote Collaborative Tasks
18 Citations | 44 References | Year: 2008
Artificial Intelligence · Engineering · Human-machine Interaction · Visual Focus · Selective Attention · Cognition · Intelligent Systems · Communication · Attention · Conversational Content · Social Sciences · Information Retrieval · Affective Computing · Intention Recognition · Robot Learning · Remote Collaborative Tasks · Human Computation · Web-based Collaboration · Real-time Collaboration · Cognitive Science · Attention From Intention · Task Performance · Visual Space · Vision Research · Computer Science · Experimental Psychology · Social Cognition · Visual Function · Visual Reasoning · Eye Tracking · Human-computer Interaction · Remote Collaboration · Interactive Computing
While shared visual space plays an important role in remote collaboration on physical tasks, tracking users' focus of attention (FOA) during these tasks is challenging and expensive. In this paper, we propose to identify a user's FOA from his/her intention, based on task properties, people's actions in the workspace, and conversational content. We employ a conditional Markov model to characterize a subject's FOA. We demonstrate the feasibility of the proposed method using a collaborative laboratory task in which one partner (the helper) instructs another (the worker) on how to assemble online puzzles. We model a helper's FOA using task properties, workers' actions, and conversational content. The accuracy of the model ranged from 65.40% for puzzles with easy-to-name pieces to 74.25% for puzzles with more difficult-to-name pieces. The proposed model can be used to predict a user's FOA in a remote collaborative task without tracking the user's eye gaze.
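To make the idea concrete, the following is a minimal sketch of a conditional Markov model that infers a helper's FOA over a few candidate targets from discrete observed cues (an utterance mentioning a piece, or a worker action). The states, cues, and all probability values here are illustrative assumptions for exposition; the paper's actual model structure and learned parameters are not reproduced.

```python
import numpy as np

# Hypothetical FOA states: two puzzle pieces and the overall workspace.
states = ["piece_A", "piece_B", "workspace"]

# Assumed transition matrix T[i, j] = P(FOA_t = j | FOA_{t-1} = i):
# attention tends to persist on the same target between steps.
T = np.array([
    [0.7, 0.2, 0.1],
    [0.2, 0.7, 0.1],
    [0.3, 0.3, 0.4],
])

# Assumed cue likelihoods E[s, o] = P(cue o | FOA = s).
# Cues (columns): 0 = helper mentions A, 1 = mentions B, 2 = worker moves a piece.
E = np.array([
    [0.6, 0.1, 0.3],
    [0.1, 0.6, 0.3],
    [0.2, 0.2, 0.6],
])

def predict_foa(obs_seq, T, E, prior):
    """Forward filtering: return the most likely FOA state index at each step."""
    belief = prior.copy()
    path = []
    for obs in obs_seq:
        belief = (T.T @ belief) * E[:, obs]  # predict forward, then condition on the cue
        belief /= belief.sum()               # renormalize to a distribution
        path.append(int(np.argmax(belief)))
    return path

prior = np.full(3, 1 / 3)
obs = [0, 0, 2, 1]  # mentions A twice, a worker action, then mentions B
path = [states[i] for i in predict_foa(obs, T, E, prior)]
# path → ["piece_A", "piece_A", "piece_A", "piece_B"]
```

Note how the persistence in the transition matrix keeps the predicted FOA on `piece_A` through the ambiguous worker-action cue, and only the explicit mention of B shifts it; this captures, in toy form, why combining conversational content with action cues can substitute for gaze tracking.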