Phase Collapse in Neural Networks - PaRis AI Research InstitutE
Preprint, Working Paper. Year: 2021

Phase Collapse in Neural Networks

Abstract

Deep convolutional image classifiers progressively transform spatial variability into a smaller number of channels that linearly separate all classes. A fundamental challenge is to understand the role of rectifiers, together with convolutional filters, in this transformation. Rectifiers with biases are often interpreted as thresholding operators that improve sparsity and discrimination. This paper demonstrates that a different mechanism, phase collapse, explains the ability to progressively eliminate spatial variability while improving linear class separation. This is explained and shown numerically with a simplified complex-valued convolutional network architecture, which implements spatial convolutions with wavelet filters and applies a complex modulus to collapse phase variables. This phase collapse network reaches the classification accuracy of ResNets of similar depth, whereas its performance degrades considerably when the phase collapse is replaced with thresholding operators. This is justified by explaining how iterated phase collapses, as opposed to thresholding non-linearities, progressively improve the separation of class means.
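The core mechanism can be illustrated with a minimal NumPy sketch (an assumption-laden simplification, not the paper's wavelet architecture): a spatial translation of the input multiplies each complex Fourier coefficient by a phase factor, and the complex modulus collapses exactly that phase, whereas a thresholding non-linearity such as a biased ReLU does not.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
x = rng.standard_normal(N)
x_shift = np.roll(x, 5)  # spatially translated copy of the same signal

# In Fourier space, filtering is a pointwise multiplication, so translation
# of the input only changes the phase of each complex coefficient.
X = np.fft.fft(x)
X_shift = np.fft.fft(x_shift)

# Phase collapse: the complex modulus removes the translation phase,
# eliminating this source of spatial variability.
print(np.allclose(np.abs(X), np.abs(X_shift)))  # True

# Thresholding (a biased ReLU on the real part, a hypothetical stand-in
# for the rectifiers discussed in the text) does not collapse the phase.
def relu_bias(z, b=0.1):
    return np.maximum(z.real - b, 0.0)

print(np.allclose(relu_bias(X), relu_bias(X_shift)))  # False in general
```

This toy example only shows invariance of a single modulus layer to global translations; the paper's argument concerns iterated phase collapses after wavelet convolutions across depth.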
Main file
ICLR2022_preprint.pdf (266.74 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03373703 , version 1 (13-10-2021)
hal-03373703 , version 2 (24-03-2022)

Identifiers

  • HAL Id : hal-03373703 , version 1

Cite

Florentin Guth, John Zarka, Stéphane Mallat. Phase Collapse in Neural Networks. 2021. ⟨hal-03373703v1⟩
