Explainable Observer-Classifier for Explainable Binary Decisions

Stephan Alaniz, Zeynep Akata

Explanations help develop a better understanding of the rationale behind a deep neural network's predictions and improve trust. We propose an explainable observer-classifier framework that exposes the steps of the decision-making process in a transparent manner. Instead of assigning a label to an image in a single step, our model makes iterative binary sub-decisions and, as a byproduct, reveals a decision tree in the form of an introspective explanation. In addition, our model produces rationalizations by assigning each binary decision a semantic meaning in the form of attributes, imitating human annotations. On six benchmark datasets of increasing size and granularity, our model outperforms classical decision trees and generates easy-to-understand binary decision sequences that explain the network's predictions.
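
To make the iterative decision scheme concrete, here is a minimal, hypothetical PyTorch sketch (not the authors' code): each of log2(K) binary sub-decisions halves the set of candidate labels, so the recorded bit sequence doubles as the root-to-leaf decision-tree path the abstract describes. The class name, the per-step linear heads, and the balanced-tree assumption are illustrative choices; the actual model additionally grounds each decision in semantic attributes.

```python
import math
import torch
import torch.nn as nn

class IterativeBinaryClassifier(nn.Module):
    """Illustrative sketch: classify via a sequence of binary sub-decisions.

    Each step emits a Bernoulli probability; the chosen branch halves the set
    of remaining candidate labels, so the decision sequence traces a path
    through an implicit balanced decision tree (an introspective explanation).
    """

    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        assert num_classes & (num_classes - 1) == 0, "power-of-two classes for a balanced tree"
        self.num_steps = int(math.log2(num_classes))
        # one small decision head per step (assumed architecture, for illustration)
        self.heads = nn.ModuleList(nn.Linear(feat_dim, 1) for _ in range(self.num_steps))

    def forward(self, feats: torch.Tensor):
        """Return predicted class indices and the binary decision path."""
        batch = feats.shape[0]
        label = torch.zeros(batch, dtype=torch.long)
        path = []
        for head in self.heads:
            bit = (torch.sigmoid(head(feats)).squeeze(-1) > 0.5).long()
            label = label * 2 + bit  # descend left (0) or right (1)
            path.append(bit)
        return label, torch.stack(path, dim=1)

# Usage: 8 classes -> 3 binary decisions per image
model = IterativeBinaryClassifier(feat_dim=16, num_classes=8)
feats = torch.randn(4, 16)  # stand-in for CNN image features
pred, path = model(feats)
print(pred)  # predicted class index per example
print(path)  # each row is a readable sequence of binary sub-decisions
```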
