Explainable AI by BAPC -- Before and After correction Parameter Comparison

Florian Sobieczky, Salma Mahmoud, Simon Neugebauer, Lukas Rippitsch, Manuela Geiß

By means of a local surrogate approach, an analytical method is defined that yields explanations of AI predictions in the framework of regression models. In the case of an AI model producing additive corrections to the predictions of a base model, the explanations are delivered in the form of a shift of the base model's interpretable parameters, as long as the AI corrections are small in a rigorously defined sense. Criteria are formulated that give a precise relation between the loss of accuracy and the lack of model fidelity. Two applications show how physical or econometric parameters may be used to interpret the action of neural network and random forest models in terms of the underlying base model. This is an extended version of our paper presented at the ISM 2020 conference, where we first introduced our new approach, BAPC.
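The before/after parameter comparison described above can be sketched as follows. This is a minimal illustration under assumptions, not the authors' implementation: it assumes a linear regression base model, a random forest fitted to the base model's residuals as the additive AI correction, and a refit of the base model after the correction is removed, so that the coefficient shift expresses the AI correction in the base model's interpretable parameters.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

# Illustrative sketch of a before/after parameter comparison (assumed workflow):
# 1. Fit an interpretable base model -> "before" parameters.
# 2. Fit an AI model on the base model's residuals (the additive correction).
# 3. Refit the base model on targets with the AI correction removed -> "after".
# 4. The parameter shift expresses the AI correction in base-model terms.

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(200, 1))
# Synthetic data: a linear trend plus a small nonlinearity and noise.
y = 1.5 * X[:, 0] + 0.5 * np.sin(3 * X[:, 0]) + rng.normal(0, 0.1, 200)

base = LinearRegression().fit(X, y)              # "before" parameters
residuals = y - base.predict(X)
ai = RandomForestRegressor(random_state=0).fit(X, residuals)  # additive correction

y_corrected = y - ai.predict(X)                  # remove the AI correction
after = LinearRegression().fit(X, y_corrected)   # "after" parameters

shift = after.coef_ - base.coef_                 # interpretable parameter shift
print("before:", base.coef_, "after:", after.coef_, "shift:", shift)
```

The shift in the fitted slope (and intercept) is the quantity a BAPC-style explanation would report: it states what the AI correction "did" in the language of the interpretable base model.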
