Interpretable Machine Learning with Bitonic Generalized Additive Models and Automatic Feature Construction
Abstract
In many machine learning applications, interpretable models are necessary to build trust or to further understand the patterns in the data. In particular, scientists often want models that elucidate knowledge and may therefore lead to new discoveries. Generalized Additive Models (GAMs) are currently gaining interest across application domains because they fit the data well while remaining intelligible. Moreover, prior domain-specific knowledge is often valuable for guiding the learning.
In this work, extensions and generalizations of GAMs are proposed to incorporate prior knowledge during the learning phase. Specifically, the GAM fitting method is modified so that it can fit the data with bitonic functions. In physics, for instance, the most discriminative variables often exhibit characteristic distributions with respect to the target variable, especially peaking (i.e. bitonic) distributions. An algorithm is also described to automatically build bitonic high-level features for use in the GAM terms. Experiments on three physics datasets, carried out in collaboration with physicists, validate these ideas.
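To illustrate the kind of shape constraint discussed above, the following is a minimal sketch of fitting a bitonic (increasing-then-decreasing) function to noisy one-dimensional data. It is not the thesis's algorithm: it simply combines scikit-learn's `IsotonicRegression` with a brute-force search over the peak location, keeping the split that minimizes squared error. The function name `fit_bitonic` and the toy peaked dataset are illustrative assumptions.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def fit_bitonic(x, y):
    """Illustrative sketch (not the thesis method): fit a bitonic
    function by trying each candidate peak position k, fitting an
    increasing isotonic regression on the left of k and a decreasing
    one on the right, and keeping the split with the lowest SSE."""
    order = np.argsort(x)
    x, y = x[order], y[order]
    best_err, best_fit = np.inf, None
    for k in range(2, len(x) - 2):  # keep both sides non-degenerate
        left = IsotonicRegression(increasing=True).fit_transform(x[:k], y[:k])
        right = IsotonicRegression(increasing=False).fit_transform(x[k:], y[k:])
        fit = np.concatenate([left, right])
        err = np.sum((fit - y) ** 2)
        if err < best_err:
            best_err, best_fit = err, fit
    return x, best_fit

# Toy peaking distribution: a Gaussian bump plus noise.
rng = np.random.default_rng(0)
x = np.linspace(-3.0, 3.0, 60)
y = np.exp(-x**2) + rng.normal(scale=0.05, size=x.size)
xs, yhat = fit_bitonic(x, y)
```

The quadratic cost of the peak search is acceptable here because each feature is one-dimensional; in a GAM, each shape function is fitted on a single variable, so this kind of per-term constraint stays cheap.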
Keywords
artificial intelligence
machine learning
online learning
interpretable model
trustworthy artificial intelligence
pattern understanding
interpretability
Generalized Additive Models (GAM)
prior domain-specific knowledge
learning guidance
GAM extension
GAM generalization
fitting method
bitonic function
discriminative variable
specific distribution
target variable
bitonic distribution
bitonic high-level features
Origin: files produced by the author(s)