Multiagent Online Learning in Potential Games and Beyond

Our series of previous works during 2014–2018 gives a positive answer to a central question in multiagent online learning: for which classes of games, and for which kinds of no-regret learning algorithms, can the convergence guarantees be strengthened? Our results confirmed that a large class of no-regret learning algorithms drives the joint strategy profile to approximate Wardrop/Nash equilibria, a stricter guarantee than convergence to the set of coarse correlated equilibria, and furthermore bounded the price of anarchy. The algorithm design (in particular, the part for the partial-information model) and the convergence analysis techniques can serve as references for follow-up work on this type of multiagent online learning problem, which combines machine learning and game theory.

Extending this line of research is the first problem in this proposal: generalizing or expanding the classes of no-regret learning algorithms, and the classes of games, for which players provably reach equilibria.

The second research direction replaces the individual objectives with a group objective. Instead of each participant running a no-regret learning algorithm on her/his own cumulative loss so that the system reaches an equilibrium, imagine that the multiagent system is a team that must jointly optimize a system objective, for example, minimizing the social cost. After taking an action in each time step, however, each participant still receives only partial feedback about her/his own action. An equilibrium and a system-objective optimum are in general different. We first consider congestion games: since the participants now all care about a common system objective rather than individual costs, we will need to design distributed learning algorithms, different from those that lead the system to equilibria, that make the system converge to the system-objective optimum, using only the partial feedback available under the bandit model.
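To make the bandit feedback model concrete, here is a minimal, self-contained sketch (not from the proposal; all parameters are illustrative assumptions) of no-regret learning in a linear congestion game: each player runs the standard Exp3 algorithm over a set of parallel links and observes only the congestion cost of the link she herself picked, never the full cost vector.

```python
import math
import random

def run_congestion_exp3(n_players=8, n_links=3, rounds=2000, gamma=0.1, seed=0):
    """Each player independently runs Exp3 over the links.

    Bandit model: a player observes only the cost of her own chosen link.
    Linear congestion: the cost of a link equals its load (number of
    players who chose it), so the social cost is sum_e load_e^2.
    Returns the per-round social costs.
    """
    rng = random.Random(seed)
    # One weight vector per player, initialized uniformly.
    weights = [[1.0] * n_links for _ in range(n_players)]
    social_costs = []
    for _ in range(rounds):
        probs, picks = [], []
        for w in weights:
            total = sum(w)
            # Exp3 mixes the weight distribution with uniform exploration.
            p = [(1 - gamma) * wi / total + gamma / n_links for wi in w]
            probs.append(p)
            picks.append(rng.choices(range(n_links), weights=p)[0])
        loads = [picks.count(e) for e in range(n_links)]
        social_costs.append(sum(loads[a] for a in picks))
        for i, a in enumerate(picks):
            loss = loads[a] / n_players        # normalize own cost to [0, 1]
            est = loss / probs[i][a]           # importance-weighted estimate
            weights[i][a] *= math.exp(-gamma * est / n_links)
    return social_costs

costs = run_congestion_exp3()
```

Under these dynamics the empirical play converges to the set of coarse correlated equilibria; the second research direction above asks for different distributed rules whose limit is instead the social-cost optimum (here, the balanced assignment of players to links).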
