Conference paper, 2022

Learning in games with quantized payoff observations

Abstract

This paper investigates the impact of feedback quantization on multi-agent learning. In particular, we analyze the equilibrium convergence properties of the well-known "follow the regularized leader" (FTRL) class of algorithms when players can only observe a quantized (and possibly noisy) version of their payoffs. In this information-constrained setting, we show that coarser quantization triggers a qualitative shift in the convergence behavior of FTRL schemes. Specifically, if the quantization error lies below a threshold value (which depends only on the underlying game and not on the level of uncertainty entering the process or the specific FTRL variant under study), then (i) FTRL is attracted to the game's strict Nash equilibria with arbitrarily high probability; and (ii) the algorithm's asymptotic rate of convergence remains the same as in the non-quantized case. Otherwise, for larger quantization levels, these convergence properties are lost altogether: players may fail to learn anything beyond their initial state, even with full information on their payoff vectors. This is in contrast to the impact of quantization in continuous optimization problems, where the quality of the obtained solution degrades smoothly with the quantization level.
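To make the setting concrete, here is a minimal, hypothetical sketch (not the paper's implementation) of one FTRL instance, exponential weights / entropic regularization, run in a small two-player coordination game where each player only observes a uniformly quantized and noisy copy of their payoff vector. The payoff matrices, the quantizer, the noise level, the step-size schedule, and all function names are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2x2 coordination game; (action 0, action 0) is a strict Nash equilibrium.
A = np.array([[2.0, 0.0],
              [0.0, 1.0]])   # player 1's payoffs
B = A.copy()                  # player 2's payoffs (symmetric coordination)

def quantize(v, delta):
    """Uniform quantizer with step `delta` (an assumed quantization model)."""
    return delta * np.round(v / delta)

def ftrl_exp_weights(T=5000, delta=0.1, noise=0.05, step=0.1):
    y1 = np.zeros(2)  # cumulative quantized payoff scores, player 1
    y2 = np.zeros(2)  # cumulative quantized payoff scores, player 2
    for t in range(1, T + 1):
        # Logit / exponential-weights choice map (entropic FTRL).
        x1 = np.exp(y1 - y1.max()); x1 /= x1.sum()
        x2 = np.exp(y2 - y2.max()); x2 /= x2.sum()
        # Mixed-strategy payoff vectors.
        v1 = A @ x2
        v2 = B.T @ x1
        # Each player only observes a quantized, noisy version of their payoffs.
        obs1 = quantize(v1 + noise * rng.standard_normal(2), delta)
        obs2 = quantize(v2 + noise * rng.standard_normal(2), delta)
        # FTRL score update with a step size of step / sqrt(t) (one common choice).
        y1 += (step / np.sqrt(t)) * obs1
        y2 += (step / np.sqrt(t)) * obs2
    return x1, x2

if __name__ == "__main__":
    print("fine quantization:  ", ftrl_exp_weights(delta=0.1))
    print("coarse quantization:", ftrl_exp_weights(delta=5.0))
```

Under these assumptions, a small quantization step leaves convergence toward the strict equilibrium essentially intact, whereas a step much larger than the payoff range maps every observation to zero, so the scores never move and the players remain at their initial (uniform) state, mirroring the qualitative threshold behavior described in the abstract.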
Main file: QuantizedLearning-CDC.pdf (284.64 KB). Origin: files produced by the author(s).

Dates and versions

hal-03874022, version 1 (27-11-2022)

Identifiers

HAL Id: hal-03874022

Cite

Kyriakos Lotidis, Panayotis Mertikopoulos, Nicholas Bambos. Learning in games with quantized payoff observations. CDC 2022 - 61st IEEE Annual Conference on Decision and Control, Dec 2022, Cancun, Mexico. pp. 1-8. ⟨hal-03874022⟩