Diverse Weight Averaging for Out-of-Distribution Generalization - Archive ouverte HAL
Conference Papers, Year: 2022

Diverse Weight Averaging for Out-of-Distribution Generalization



Standard neural networks struggle to generalize under distribution shifts. For out-of-distribution generalization in computer vision, the best current approach averages the weights along a single training run. In this paper, we propose Diverse Weight Averaging (DiWA), a simple change to this strategy: DiWA averages the weights obtained from several independent training runs rather than from a single run. Perhaps surprisingly, averaging these weights performs well despite the network's nonlinearities, under the mild constraint that all runs are fine-tuned from a shared initialization. The main motivation behind DiWA is to increase the functional diversity across averaged models: models obtained from different runs are more diverse than those collected along a single run, thanks to differences in hyperparameters and training procedures. We motivate the need for diversity with a new bias-variance-covariance-locality decomposition of the expected error, exploiting similarities between DiWA and standard functional ensembling. This decomposition also highlights that DiWA succeeds when the variance term dominates, which we show occurs when the marginal distribution changes at test time. Experimentally, DiWA consistently improves the state of the art on the competitive DomainBed benchmark without inference overhead.
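The decomposition mentioned above builds on the classical bias-variance-covariance identity for a uniform ensemble of M predictors; the schematic form below is that standard ensemble result (the paper extends it with an additional locality term, which is not reproduced here):

$$\mathbb{E}\Big[\big(\bar{f} - y\big)^2\Big] \;=\; \overline{\mathrm{bias}}^2 \;+\; \frac{1}{M}\,\overline{\mathrm{var}} \;+\; \Big(1 - \frac{1}{M}\Big)\,\overline{\mathrm{cov}}, \qquad \bar{f} = \frac{1}{M}\sum_{m=1}^{M} f_m.$$

Lowering the covariance across members, i.e., increasing their functional diversity, reduces the expected error; this is the rationale for averaging independent runs rather than checkpoints from a single run.

As a rough illustration of the averaging step itself, here is a minimal sketch assuming PyTorch-style state dicts loaded from checkpoints that were fine-tuned from a shared initialization; this is not the authors' released implementation, and the function name is hypothetical:

    import copy
    import torch

    def average_weights(state_dicts):
        """Uniformly average the parameters of several checkpoints
        obtained from independent training runs (sketch, not the
        authors' code)."""
        avg = copy.deepcopy(state_dicts[0])
        for name in avg:
            # Stack the same tensor from every run and take the elementwise
            # mean, casting back so integer buffers (e.g., batch-norm
            # counters) keep their original dtype.
            stacked = torch.stack([sd[name].float() for sd in state_dicts])
            avg[name] = stacked.mean(dim=0).to(avg[name].dtype)
        return avg

    # Hypothetical usage: load M checkpoints, average, load into one model.
    # state_dicts = [torch.load(p) for p in checkpoint_paths]
    # model.load_state_dict(average_weights(state_dicts))

Because the result is a single set of weights, inference costs the same as one network, which matches the "no inference overhead" claim; functional ensembling, by contrast, requires M forward passes.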

Dates and versions

hal-03891267, version 1 (09-12-2022)



Alexandre Rame, Matthieu Kirchmeyer, Thibaud Rahier, Alain Rakotomamonjy, Patrick Gallinari, et al. Diverse Weight Averaging for Out-of-Distribution Generalization. Conference on Neural Information Processing Systems, Nov 2022, New Orleans, United States. ⟨hal-03891267⟩


