Abstract
Multi-agent reinforcement learning (MARL) has attracted much research attention recently. However, unlike its single-agent counterpart, many theoretical and algorithmic aspects of MARL have not been well understood. In this paper, we study the emergence of coordinated behavior by autonomous agents using an actor-critic (AC) algorithm. Specifically, we propose and analyze a class of coordinated actor-critic (CAC) algorithms in which individually parametrized policies have a shared part (which is jointly optimized among all agents) and a personalized part (which is only locally optimized). Such partially personalized policies allow the agents to coordinate by leveraging peers' experience while adapting to their individual tasks. The flexibility of our design allows the proposed CAC algorithm to be used in a fully decentralized setting, where the agents can only communicate with their neighbors, as well as in a federated setting, where the agents occasionally communicate with a server while optimizing their (partially personalized) local models. Theoretically, we show that under some standard regularity assumptions, the proposed CAC algorithm requires O(ε^{-5/2}) samples to achieve an ε-stationary solution (defined as a solution at which the squared norm of the gradient of the objective function is less than ε). To the best of our knowledge, this work provides the first finite-sample guarantee for decentralized AC algorithms with partially personalized policies.
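The shared-plus-personalized parametrization described above can be illustrated with a minimal sketch (not the authors' implementation; the PyTorch framing, module names, and dimensions are illustrative assumptions): each agent's policy composes a trunk that is jointly optimized across agents with a head that is optimized only locally.

```python
# Minimal sketch of a partially personalized policy (illustrative only):
# a shared trunk jointly optimized by all agents plus a per-agent head
# that is only locally optimized. Names and sizes are assumptions.
import torch
import torch.nn as nn

class PartiallyPersonalizedPolicy(nn.Module):
    def __init__(self, shared_trunk, act_dim, hidden_dim=64):
        super().__init__()
        self.shared = shared_trunk                      # shared part (common to all agents)
        self.personal = nn.Linear(hidden_dim, act_dim)  # personalized part (per agent)

    def forward(self, obs):
        # Action distribution induced by the composed policy network.
        return torch.distributions.Categorical(logits=self.personal(self.shared(obs)))

obs_dim, act_dim = 8, 4
trunk = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh())  # one trunk shared by both agents
agents = [PartiallyPersonalizedPolicy(trunk, act_dim) for _ in range(2)]
action = agents[0](torch.zeros(obs_dim)).sample()         # sample an action for agent 0
```

In the decentralized or federated settings described in the abstract, only the shared trunk's parameters would be exchanged (with neighbors or with a server), while each personalized head remains local to its agent.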
| Original language | English (US) |
|---|---|
| Pages (from-to) | 278-290 |
| Number of pages | 13 |
| Journal | Proceedings of Machine Learning Research |
| Volume | 168 |
| State | Published - 2022 |
| Event | 4th Annual Learning for Dynamics and Control Conference, L4DC 2022 - Stanford, United States; Duration: Jun 23 2022 → Jun 24 2022 |
Bibliographical note
Publisher Copyright: © 2022 S. Zeng, T. Chen, A. Garcia & M. Hong.
Keywords
- Actor-Critic
- Multi-Agent Reinforcement Learning
- Parameter Sharing