Journal article in IEEE Access, 2022

Energy Efficient Learning With Low Resolution Stochastic Domain Wall Synapse for Deep Neural Networks

Walid Al Misba
D. Querlioz
Jayasimha Atulasimha

Abstract

We demonstrate that extremely low-resolution quantized (nominally 5-state) synapses with large stochastic variations in synaptic weights can be energy efficient and achieve reasonably high testing accuracies compared to Deep Neural Networks (DNNs) of similar sizes using floating-point precision synaptic weights. Specifically, voltage-controlled domain wall (DW) devices exhibit stochastic behavior and can only encode a limited number of states; however, they are extremely energy efficient during both training and inference. In this study, we propose both in-situ and ex-situ training algorithms, based on a modification of the algorithm proposed by Hubara et al., 2017, which works well with quantization of synaptic weights, and train several 5-layer DNNs on the MNIST dataset using 2-, 3- and 5-state DW devices as synapses. For in-situ training, a separate high-precision memory unit preserves and accumulates the weight gradients, which prevents accuracy loss due to weight quantization. For ex-situ training, a precursor DNN is first trained based on weight quantization and the DW device model. Moreover, a noise tolerance margin is included in both training methods to account for the intrinsic device noise. The highest inference accuracies we obtain after in-situ and ex-situ training are ∼96.67% and ∼96.63%, respectively, which is very close to the baseline accuracy of ∼97.1% obtained from a DNN of similar topology having floating-point precision weights with no stochasticity. Large interstate intervals due to quantized weights and the noise tolerance margin enable in-situ training with significantly fewer programming attempts. Our proposed approach demonstrates the possibility of at least two orders of magnitude energy savings compared to the floating-point approach implemented in CMOS.
This approach is specifically attractive for low-power intelligent edge devices, where ex-situ learning can be utilized for energy-efficient non-adaptive tasks and in-situ learning can provide the opportunity to adapt and learn in a dynamically evolving environment.
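The in-situ scheme described above can be sketched in a few lines: a high-precision "shadow" copy of each weight accumulates gradient updates, while the device itself only ever holds the nearest of a handful of quantized states. The following is a minimal illustrative sketch, not the authors' implementation; the 5 evenly spaced states in [-1, 1], the learning rate, and all variable names are assumptions for illustration only.

```python
import numpy as np

def quantize_5_state(w, w_max=1.0):
    """Map real-valued weights to the nearest of 5 evenly spaced
    device states in [-w_max, w_max] (hypothetical state layout)."""
    levels = np.linspace(-w_max, w_max, 5)          # [-1, -0.5, 0, 0.5, 1]
    idx = np.abs(levels - w[..., None]).argmin(axis=-1)
    return levels[idx]

# A high-precision shadow weight accumulates gradients (the "separate
# high-precision memory unit"); the device stores only its quantized
# projection, so small updates are not lost to quantization.
rng = np.random.default_rng(0)
w_hp = rng.uniform(-1, 1, size=4)        # high-precision memory unit
grad = rng.normal(0, 0.1, size=4)        # toy gradient for illustration
lr = 0.1                                 # assumed learning rate

w_hp = np.clip(w_hp - lr * grad, -1, 1)  # accumulate in high precision
w_dev = quantize_5_state(w_hp)           # value programmed on the DW device

# Every programmed value must be one of the 5 device states.
assert set(np.round(w_dev, 2)).issubset({-1.0, -0.5, 0.0, 0.5, 1.0})
```

In this picture, a noise tolerance margin would correspond to accepting a programmed device state whenever it falls within a fixed band around the target level, reducing the number of programming attempts.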
Main file: Energy_Efficient_Learning_With_Low_Resolution_Stochastic_Domain_Wall_Synapse_for_Deep_Neural_Networks.pdf (1.45 MB)
Origin: Publisher files allowed on an open archive

Dates and versions

hal-03861118 , version 1 (19-11-2022)

Identifiers

Cite

Walid Al Misba, Mark Lozano, D. Querlioz, Jayasimha Atulasimha. Energy Efficient Learning With Low Resolution Stochastic Domain Wall Synapse for Deep Neural Networks. IEEE Access, 2022, 10, pp.84946 - 84959. ⟨10.1109/access.2022.3196688⟩. ⟨hal-03861118⟩