Publication | Closed Access
Learning of Coordination: Exploiting Sparse Interactions in Multiagent Systems
Citations: 72 | References: 15 | Year: 2009 | Venue: Unknown
Creating coordinated multiagent policies in environments with uncertainty is a challenging problem, which can be greatly simplified if the coordination needs are known to be limited to specific parts of the state space, as previous work has successfully shown. In this work, we assume that such needs are unknown and we investigate coordination learning in multiagent settings. We contribute a reinforcement-learning-based algorithm in which independent decision-makers/agents learn both individual policies and when and how to coordinate. We focus on problems in which the interaction between the agents is sparse, exploiting this property to minimize the coupling of the learning processes for the different agents. We introduce a two-layer extension of Q-learning, in which we augment the action space of each agent with a coordination action that uses information from other agents to decide the correct action. Our results show that our agents learn both to coordinate and to act independently in the different regions of the state space where they need to, and need not, coordinate, respectively.
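The two-layer idea in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: the class name, the `COORDINATE` pseudo-action, and the learning-rate defaults are all assumptions. Each agent keeps a local Q-table over its own state plus the augmented action set; a second, joint-state Q-table is consulted only when the coordination action is selected.

```python
import random
from collections import defaultdict

COORDINATE = "coordinate"  # pseudo-action appended to each agent's action set (illustrative name)

class SparseCoordinationAgent:
    """Sketch of a two-layer Q-learner: a local layer over the agent's own
    state, and a joint layer consulted only when COORDINATE is chosen."""

    def __init__(self, actions, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.base_actions = list(actions)
        self.actions = self.base_actions + [COORDINATE]
        self.q_local = defaultdict(float)   # (local_state, action) -> value
        self.q_joint = defaultdict(float)   # (joint_state, action) -> value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def select(self, local_state, joint_state):
        """Epsilon-greedy over the augmented action set; if COORDINATE wins,
        resolve it to a concrete action using joint-state information."""
        if random.random() < self.epsilon:
            top = random.choice(self.actions)
        else:
            top = max(self.actions, key=lambda a: self.q_local[(local_state, a)])
        if top != COORDINATE:
            return top, False
        concrete = max(self.base_actions,
                       key=lambda a: self.q_joint[(joint_state, a)])
        return concrete, True

    def update(self, local_state, joint_state, action, coordinated,
               reward, next_local_state):
        """Standard Q-learning backup, applied to the layer that produced
        the executed action."""
        best_next = max(self.q_local[(next_local_state, a)] for a in self.actions)
        target = reward + self.gamma * best_next
        if coordinated:
            # credit the joint layer for the concrete choice, and the local
            # layer for having selected COORDINATE in this local state
            q = self.q_joint[(joint_state, action)]
            self.q_joint[(joint_state, action)] += self.alpha * (target - q)
            key = (local_state, COORDINATE)
        else:
            key = (local_state, action)
        self.q_local[key] += self.alpha * (target - self.q_local[key])
```

Because the joint layer is only trained in states where the agent actually chose to coordinate, the learning processes of different agents stay decoupled wherever interaction is sparse, which is the property the paper exploits.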