
Multiagent Reinforcement Social Learning toward Coordination in Cooperative Multiagent Systems

JIANYE HAO, Massachusetts Institute of Technology; Shenzhen University
HO-FUNG LEUNG, The Chinese University of Hong Kong
ZHONG MING, Shenzhen University


Publisher
Association for Computing Machinery
Copyright
Copyright © 2014 by ACM Inc.
ISSN
1556-4665
DOI
10.1145/2644819

Abstract

Most previous works on coordination in cooperative multiagent systems study the problem of how two (or more) players can coordinate on Pareto-optimal Nash equilibria through fixed, repeated interactions in the context of cooperative games. In practical complex environments, however, interactions between agents can be sparse, and each agent's interacting partners may change frequently and randomly. To this end, we investigate multiagent coordination problems in cooperative environments under a social learning framework. We consider a large population of agents in which each agent, in each round, interacts with another agent chosen randomly from the population. Each agent learns its policy through repeated interactions with the rest of the agents via social learning. It is not clear a priori whether all agents can learn a consistent optimal coordination policy in such a situation. We distinguish two types of learners, depending on the amount of information each agent can perceive: the individual action learner and the joint action learner. The learning performance of both types of learners is evaluated under…
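The social learning framework described above can be illustrated with a minimal sketch: a population of agents is randomly paired each round to play a simple two-action cooperative game, and each agent updates its own action values from the payoff it receives (in the style of an individual action learner). The payoff matrix, learning parameters, and all function names here are assumptions for illustration only, not the paper's actual setup.

```python
import random

# Cooperative game: coordinating on ("a", "a") is Pareto-optimal,
# ("b", "b") is a suboptimal equilibrium, miscoordination pays 0.
# These payoffs are illustrative assumptions.
PAYOFF = {("a", "a"): 10, ("b", "b"): 7, ("a", "b"): 0, ("b", "a"): 0}
ACTIONS = ["a", "b"]
ALPHA, EPSILON = 0.1, 0.1  # learning rate, exploration rate


def choose(q):
    """Epsilon-greedy action selection over an agent's Q-values."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[a])


def simulate(n_agents=100, rounds=2000, seed=0):
    """Run the social-learning loop: random pairing each round,
    independent Q-value updates from each agent's own payoff."""
    random.seed(seed)
    qs = [{a: 0.0 for a in ACTIONS} for _ in range(n_agents)]
    for _ in range(rounds):
        agents = list(range(n_agents))
        random.shuffle(agents)  # random partner assignment this round
        for i, j in zip(agents[::2], agents[1::2]):
            ai, aj = choose(qs[i]), choose(qs[j])
            r = PAYOFF[(ai, aj)]
            qs[i][ai] += ALPHA * (r - qs[i][ai])
            qs[j][aj] += ALPHA * (r - qs[j][aj])
    # Fraction of agents whose greedy action is the optimal "a".
    return sum(max(ACTIONS, key=lambda a: q[a]) == "a" for q in qs) / n_agents


print(simulate())
```

A joint action learner would instead condition its values on the observed joint action (e.g., keyed by `(own_action, partner_action)`), which is the informational distinction the abstract draws between the two learner types.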

Journal

ACM Transactions on Autonomous and Adaptive Systems (TAAS), Association for Computing Machinery

Published: Dec 19, 2014
