Title: Cooperative Multi-Agent Constrained POMDPs: Strong Duality and Primal-Dual Reinforcement Learning with Approximate Information States
Speaker: Vijay Subramanian (University of Michigan, Ann Arbor)
Details: Wed, 3 Jan 2024, 2:30 PM @ ESB-244
Abstract: We study the problem of decentralized constrained POMDPs in a team setting where multiple non-strategic agents have asymmetric information. Strong duality is established for the setting of infinite-horizon expected total discounted costs when the observations lie in a countable space, the actions are chosen from a finite space, and the immediate cost functions are bounded. Following this, connections with the common-information and approximate information-state approaches are established. The approximate information states are characterized independently of the Lagrange-multiplier vector (under certain assumptions), so that adaptations of the multipliers during learning do not necessitate new representations. Finally, a primal-dual multi-agent reinforcement learning (MARL) framework based on centralized training with decentralized execution (CTDE) and three time-scale stochastic approximation is developed with the aid of recurrent and feedforward neural networks for function approximation. As part of this talk, some broader context on decentralized teams will also be provided. This is joint work with Nouman Khan at the University of Michigan, Ann Arbor (appeared in part in the proceedings of IEEE CDC 2023), and with Hsu Kao, then a Ph.D. student at the University of Michigan, Ann Arbor (appeared in the proceedings of AISTATS 2022).
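To fix ideas for attendees, the following is a generic statement of the kind of constrained discounted-cost problem and Lagrangian dual that a strong-duality result of this type concerns; the notation (team policy g, costs c_k, budgets alpha_k, discount beta, multipliers lambda) is illustrative and not necessarily the speaker's.

% Constrained problem (P): minimize the discounted objective cost subject
% to K discounted constraint costs (illustrative notation).
\begin{align*}
  \text{(P)}\quad
  & \inf_{g}\; J_0(g) := \mathbb{E}^{g}\Big[\sum_{t=0}^{\infty} \beta^{t}\, c_0(X_t, A_t)\Big] \\
  & \text{s.t.}\quad J_k(g) := \mathbb{E}^{g}\Big[\sum_{t=0}^{\infty} \beta^{t}\, c_k(X_t, A_t)\Big] \le \alpha_k,
  \qquad k = 1, \dots, K.
\end{align*}
% Lagrangian relaxation with multipliers \lambda \in \mathbb{R}_{\ge 0}^{K}:
\[
  L(g, \lambda) \;=\; J_0(g) \;+\; \sum_{k=1}^{K} \lambda_k \big( J_k(g) - \alpha_k \big).
\]
% Strong duality, of the kind established in the talk (under countable
% observations, finite actions, bounded costs), asserts no duality gap:
\[
  \inf_{g} \sup_{\lambda \ge 0} L(g, \lambda) \;=\; \sup_{\lambda \ge 0} \inf_{g} L(g, \lambda).
\]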
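Likewise, a minimal single-agent caricature of a three time-scale primal-dual stochastic-approximation loop is sketched below (fast critic, slower actor, slowest multiplier). It is not the speaker's MARL/CTDE algorithm; the toy two-action problem and all names are assumptions made purely for illustration.

import numpy as np

# Hypothetical toy problem, NOT from the talk: two actions with noisy
# objective cost c0 and constraint cost c1; minimize E[c0] s.t. E[c1] <= alpha.
rng = np.random.default_rng(0)
C0 = np.array([1.0, 0.2])   # mean objective cost per action
C1 = np.array([0.1, 0.9])   # mean constraint cost per action
alpha = 0.5                 # constraint budget

theta = 0.0                 # softmax policy parameter (logit of action 1)
lam = 0.0                   # Lagrange multiplier
q = np.zeros(2)             # critic: per-action Lagrangian-cost estimates

for t in range(1, 200_001):
    # Three step-size schedules with a_t >> b_t >> g_t asymptotically,
    # the defining feature of three time-scale stochastic approximation.
    a_t = 1.0 / t**0.6      # critic (fastest)
    b_t = 1.0 / t**0.8      # actor (middle)
    g_t = 1.0 / t           # multiplier (slowest)

    p1 = 1.0 / (1.0 + np.exp(-theta))          # P(action 1)
    a = int(rng.random() < p1)                 # sample an action
    c0 = C0[a] + 0.1 * rng.standard_normal()   # noisy objective cost
    c1 = C1[a] + 0.1 * rng.standard_normal()   # noisy constraint cost

    # Critic: track the per-action Lagrangian cost c0 + lam * c1.
    q[a] += a_t * (c0 + lam * c1 - q[a])

    # Actor: stochastic gradient descent on the estimated Lagrangian;
    # d/dtheta E_p[q] = (q[1] - q[0]) * p1 * (1 - p1) for a 2-action softmax.
    theta -= b_t * (q[1] - q[0]) * p1 * (1.0 - p1)

    # Multiplier: projected ascent on the constraint violation.
    lam = max(0.0, lam + g_t * (c1 - alpha))

print(f"P(action 1) = {1/(1+np.exp(-theta)):.3f}, lambda = {lam:.3f}")

On this toy instance the loop drifts toward P(action 1) near 0.5 and lambda near 1.0, the constrained optimum; the talk's framework replaces the tabular critic with recurrent/feedforward networks over approximate information states and couples multiple agents through CTDE.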