Publication | Closed Access
Attend and Interact: Higher-Order Object Interactions for Video Understanding
Citations: 145
References: 49
Year: 2018
Venue: Unknown
Keywords: Engineering, Machine Learning, Arbitrary Subgroups, Video Retrieval, Video Interpretation, Human-object Interaction, Natural Language Processing, Image Analysis, Data Science, Pattern Recognition, Video Transformer, Human Actions, Machine Vision, Vision Language Model, Computer Science, Video Understanding, Deep Learning, Complex Interactions, Computer Vision, Scene Interpretation
Human actions often involve complex interactions across several inter-related objects in the scene. However, existing approaches to fine-grained video understanding or visual relationship detection often rely on single-object representations or pairwise object relationships. Furthermore, learning interactions across multiple objects in hundreds of frames of video is computationally infeasible, and performance may suffer since a large combinatorial space has to be modeled. In this paper, we propose to efficiently learn higher-order interactions between arbitrary subgroups of objects for fine-grained video understanding. We demonstrate that modeling object interactions significantly improves accuracy for both action recognition and video captioning, while requiring more than three times less computation than traditional pairwise relationships. The proposed method is validated on two large-scale datasets: Kinetics and ActivityNet Captions. Our SINet and SINet-Caption achieve state-of-the-art performance on both datasets even though the videos are sampled at a maximum of 1 FPS. To the best of our knowledge, this is the first work to model object interactions on open-domain large-scale video datasets; we additionally model higher-order object interactions, which improve performance at low computational cost.
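The key efficiency idea in the abstract — attending to arbitrary subgroups of objects instead of enumerating all pairs — can be illustrated with a small sketch. This is not the authors' SINet implementation; it is a minimal NumPy illustration under assumed shapes, where each of `num_groups` attention heads softly selects a subgroup of per-frame object features conditioned on a scene context vector, giving O(num_groups × N) cost rather than the O(N²) of pairwise relations. All names (`higher_order_interaction`, `W_att`, `W_out`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def higher_order_interaction(obj_feats, context, W_att, W_out):
    """Sketch of attentive subgroup pooling (hypothetical, not the paper's exact module).

    obj_feats: (N, d) detected-object features for one frame
    context:   (d,) frame-level context vector (e.g. pooled CNN feature)
    W_att:     (K, 2*d) attention weights, one row per soft subgroup
    W_out:     (K*d, d) projection of the concatenated subgroup features
    Cost scales as O(K * N), not the O(N^2) of exhaustive pairwise relations.
    """
    groups = []
    for w_k in W_att:
        # score each object jointly with the scene context
        scores = np.array([w_k @ np.concatenate([o, context]) for o in obj_feats])
        attn = softmax(scores)            # soft membership of each object in this subgroup
        groups.append(attn @ obj_feats)   # (d,) attentively pooled subgroup feature
    # combine all subgroup features into one higher-order interaction vector
    return np.concatenate(groups) @ W_out

# toy usage: 5 detected objects with 8-dim features, 3 soft subgroups
N, d, K = 5, 8, 3
feats = rng.normal(size=(N, d))
ctx = rng.normal(size=d)
out = higher_order_interaction(feats, ctx,
                               rng.normal(size=(K, 2 * d)),
                               rng.normal(size=(K * d, d)))
print(out.shape)  # (8,)
```

With K fixed and small, adding more detected objects grows the cost only linearly, which matches the abstract's claim of a more than three-fold computational saving over pairwise modeling.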