EUROPT 2025
Abstract Submission

600. From min-max to harmonic games: Regret, learning, and the role of optimism

Invited abstract in session MB-10: Optimization, Learning, and Games I, stream Optimization, Learning, and Games.

Monday, 10:30-12:30
Room: B100/8011

Authors (first author is the speaker)

1. Panayotis Mertikopoulos
Laboratoire d'Informatique de Grenoble, French National Center for Scientific Research (CNRS)

Abstract

Min-max games are often thought of as the strategic counterpart of potential / common-interest games, but this view is overly simplistic: there are simple two-player, zero-sum games that are also potential games, so the analogy is hardly apt. In this talk, we will discuss the class of harmonic games, originally introduced by Candogan et al. (2011), which can be shown to be the orthogonal complement of the class of potential games, and thus provides a more principled counterpart thereof.
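As a toy illustration of this point (my own sketch, not material from the talk), the standard four-cycle condition of Monderer and Shapley characterizes exact potential 2x2 games: the deviators' payoff changes around the cycle (0,0) -> (1,0) -> (1,1) -> (0,1) -> (0,0) must sum to zero. Matching Pennies, a zero-sum game that is harmonic, violates it, while a zero-sum game with separable payoffs u1(a,b) = phi(a) - psi(b) satisfies it and is therefore potential:

```python
def cycle_value(u1, u2):
    """Sum of the deviating player's payoff changes around the 4-cycle
    of unilateral deviations; zero iff the 2x2 game admits an exact potential."""
    return ((u1[1][0] - u1[0][0])    # player 1 deviates: (0,0) -> (1,0)
            + (u2[1][1] - u2[1][0])  # player 2 deviates: (1,0) -> (1,1)
            + (u1[0][1] - u1[1][1])  # player 1 deviates: (1,1) -> (0,1)
            + (u2[0][0] - u2[0][1])) # player 2 deviates: (0,1) -> (0,0)

# Matching Pennies: zero-sum and harmonic, hence not a potential game.
mp1 = [[1, -1], [-1, 1]]
mp2 = [[-1, 1], [1, -1]]

# Zero-sum game with separable payoffs u1(a,b) = phi(a) - psi(b),
# phi = psi = [0, 1]: zero-sum yet an exact potential game.
zs1 = [[0, -1], [1, 0]]
zs2 = [[0, 1], [-1, 0]]

print(cycle_value(mp1, mp2))  # nonzero: not potential
print(cycle_value(zs1, zs2))  # zero: potential
```

This makes the abstract's opening claim concrete: "zero-sum" and "potential" are not mutually exclusive, so the proper antipode of potential games has to be sought elsewhere.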

The main focus of the talk will be the long-run behavior of regularized learning algorithms, such as mirror descent, in harmonic games. In a continuous-time setting, we show that the learning dynamics are Poincaré recurrent, i.e., they return arbitrarily close to their starting point infinitely often, so they fail to converge. In discrete time, the standard, "vanilla" implementation of the method may lead to even worse outcomes, eventually trapping the players in a perpetual cycle of best responses. However, by adding a suitable extrapolation step, which includes as special cases the optimistic and mirror-prox variants of the algorithm, we show that learning converges to a Nash equilibrium from any initial condition, and all players are guaranteed at most O(1) regret. In this regard, harmonic games comprise the canonical complement of potential games not only from a strategic viewpoint, but also from a dynamical one.
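The cycling-versus-convergence contrast can be sketched numerically (my own toy example, not code from the talk) in Matching Pennies, the prototypical harmonic game, using projected gradient play on the players' mixed strategies. The vanilla method spirals away from the uniform equilibrium (1/2, 1/2) and ends up circling the boundary, while the optimistic variant, which extrapolates with the previous gradient, converges:

```python
import numpy as np

ETA, T = 0.05, 500
EQ = np.array([0.5, 0.5])  # unique Nash equilibrium: both players mix uniformly

def grad(z):
    """Each player's own-payoff gradient in Matching Pennies, u1 = (2p-1)(2q-1)."""
    p, q = z
    return np.array([2 * (2 * q - 1), -2 * (2 * p - 1)])

def run(optimistic):
    z = np.array([0.9, 0.2])  # arbitrary initial mixed strategies
    g_prev = grad(z)
    for _ in range(T):
        g = grad(z)
        # optimistic step extrapolates: 2*g_t - g_{t-1}; vanilla just uses g_t
        step = 2 * g - g_prev if optimistic else g
        z = np.clip(z + ETA * step, 0.0, 1.0)  # project back to the strategy box
        g_prev = g
    return np.linalg.norm(z - EQ)  # distance to equilibrium after T steps

van, opt = run(False), run(True)
print(f"vanilla: {van:.4f}, optimistic: {opt:.4f}")
```

Running this, the vanilla iterates stay far from equilibrium (the rotation-expansion map keeps them near the boundary of the strategy box), whereas the optimistic iterates contract toward (1/2, 1/2), mirroring the recurrence-versus-convergence dichotomy described above.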

Status: accepted
