Conference paper, Year: 2022

Sliced-Wasserstein normalizing flows: beyond maximum likelihood training

Abstract

Despite their advantages, normalizing flows generally suffer from several shortcomings, including a tendency to generate unrealistic data (e.g., images) and a failure to detect out-of-distribution data. One reason for these deficiencies lies in the training strategy, which traditionally relies on the maximum likelihood principle only. This paper proposes a new training paradigm based on a hybrid objective function that combines the maximum likelihood principle (MLE) with a sliced-Wasserstein distance. Results obtained on synthetic toy examples and real image data sets show better generative abilities in terms of both the likelihood and the visual quality of the generated samples. Conversely, the proposed approach assigns lower likelihood to out-of-distribution data, demonstrating a greater data fidelity of the resulting flows.
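To make the hybrid objective concrete, the following is a minimal sketch in PyTorch, assuming a flow object that exposes a log_prob method and a differentiable sample method (hypothetical interface names); the weighting coefficient lam and the number of random projections are illustrative choices, not settings taken from the paper.

    import torch

    def sliced_wasserstein_distance(x, y, n_projections=50):
        # Monte Carlo estimate of the (squared) sliced-Wasserstein-2 distance
        # between two equally sized batches of samples x and y, shape [batch, dim].
        dim = x.shape[1]
        # Draw random projection directions uniformly on the unit sphere
        theta = torch.randn(n_projections, dim, device=x.device)
        theta = theta / theta.norm(dim=1, keepdim=True)
        # Project both sample sets onto each direction
        x_proj = x @ theta.T  # [batch, n_projections]
        y_proj = y @ theta.T
        # The 1D Wasserstein distance reduces to comparing sorted projections
        x_sorted, _ = torch.sort(x_proj, dim=0)
        y_sorted, _ = torch.sort(y_proj, dim=0)
        return ((x_sorted - y_sorted) ** 2).mean()

    def hybrid_loss(flow, x, lam=1.0):
        # Hybrid objective: negative log-likelihood (MLE term) plus a
        # sliced-Wasserstein penalty between data and samples drawn from the flow.
        nll = -flow.log_prob(x).mean()
        x_gen = flow.sample(x.shape[0])  # assumed differentiable w.r.t. flow parameters
        sw = sliced_wasserstein_distance(x_gen, x)
        return nll + lam * sw

In this sketch the sliced-Wasserstein term only requires random 1D projections and sorts of the two batches, which makes it an inexpensive complement to the likelihood term.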
Main file
Sliced_Wasserstein_normalizing_flows__beyond_maximum_likelihood_training__ESANN_.pdf (261.76 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03720995, version 1 (12-07-2022)

License

Attribution

Identifiers

Cite

Florentin Coeurdoux, Nicolas Dobigeon, Pierre Chainais. Sliced-Wasserstein normalizing flows: beyond maximum likelihood training. 30th European Symposium on Artificial Neural Networks (ESANN 2022), Oct 2022, Bruges, Belgium. ⟨10.48550/arXiv.2207.05468⟩. ⟨hal-03720995⟩
175 Views
54 Downloads
