Abstract
Learning in games has been widely used to solve cooperative multi-agent problems such as coverage control, consensus, self-reconfiguration, and vehicle-target assignment. A standard approach in this domain is to formulate the problem as a potential game and to use an algorithm such as log-linear learning to make the globally optimal configurations stochastically stable. Standard versions of such learning algorithms are asynchronous, i.e., only one agent updates its action in each round of the learning process. To enable faster learning, we propose a synchronization strategy based on decentralized random prioritization of agents, which allows multiple agents to change their actions simultaneously when they do not affect each other's utilities or feasible actions. We show that the proposed approach can be integrated into any standard asynchronous learning algorithm, improving the convergence speed while maintaining the limiting behavior (e.g., the set of stochastically stable configurations). We support our theoretical results with simulations in a coverage control scenario.
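The abstract describes the mechanism only at a high level, so the Python sketch below illustrates one way the pieces could fit together: standard log-linear learning (each updating agent samples from a Gibbs distribution over its own actions at temperature `tau`), combined with a decentralized random-prioritization round in which every agent draws a random priority and updates only if it out-prioritizes all agents it interacts with. This local-maximum rule guarantees that agents updating in the same round cannot affect one another; it is an assumed instantiation for illustration, not necessarily the paper's exact mechanism, and all identifiers (`utility`, `neighbors`, `action_sets`, `tau`) are hypothetical.

```python
import math
import random

def gibbs_sample(utilities, tau):
    """Sample an index from the Gibbs (Boltzmann) distribution over the
    given utility values at temperature tau."""
    m = max(utilities)  # shift by the max for numerical stability
    weights = [math.exp((u - m) / tau) for u in utilities]
    r = random.uniform(0.0, sum(weights))
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1

def synchronized_round(actions, action_sets, utility, neighbors, tau):
    """One synchronized round of log-linear learning (a sketch).

    actions[i]     -- agent i's current action
    action_sets[i] -- agent i's feasible actions (assumed fixed here)
    utility(i, a)  -- agent i's utility under the joint action list a
    neighbors[i]   -- agents whose utilities/feasible actions agent i's
                      choice can affect (assumed symmetric)
    tau            -- temperature; smaller tau favors higher utility
    """
    n = len(actions)
    # Decentralized random prioritization: each agent draws a priority.
    priority = [random.random() for _ in range(n)]
    # An agent updates only if it beats every interaction neighbor, so
    # agents updating in the same round cannot affect one another.
    updaters = [i for i in range(n)
                if all(priority[i] > priority[j] for j in neighbors[i])]
    new_actions = list(actions)
    for i in updaters:
        # Score each candidate action with all other agents held fixed.
        utils = []
        for a in action_sets[i]:
            trial = list(actions)
            trial[i] = a
            utils.append(utility(i, trial))
        new_actions[i] = action_sets[i][gibbs_sample(utils, tau)]
    return new_actions
```

Because the agents that update in the same round are pairwise non-interacting under this rule, each round behaves like a batch of independent asynchronous updates, which is consistent with the abstract's claim that the synchronization layer speeds up convergence without altering the stochastically stable configurations.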
Original language | English (US) |
---|---|
Title of host publication | 2022 IEEE 61st Conference on Decision and Control, CDC 2022 |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
Pages | 2500-2505 |
Number of pages | 6 |
ISBN (Electronic) | 9781665467612 |
State | Published - 2022 |
Event | 61st IEEE Conference on Decision and Control, CDC 2022 - Cancun, Mexico (Dec 6 2022 → Dec 9 2022)
Publication series
Name | Proceedings of the IEEE Conference on Decision and Control |
---|---|
Volume | 2022-December |
ISSN (Print) | 0743-1546 |
ISSN (Electronic) | 2576-2370 |
Conference
Conference | 61st IEEE Conference on Decision and Control, CDC 2022 |
---|---|
Country/Territory | Mexico |
City | Cancun |
Period | 12/6/22 → 12/9/22 |
Bibliographical note
Publisher Copyright: © 2022 IEEE.