Bridging the Gap Between Explainable AI and Uncertainty Quantification to Enhance Trustability

Dominik Seuß

Following the tremendous advances of deep learning and other AI methods, attention is increasingly turning to further properties of modern approaches, such as interpretability and fairness, combined in frameworks like Responsible AI. Two research directions, namely Explainable AI and Uncertainty Quantification, are becoming more and more important, but have so far never been combined and jointly explored. In this paper, I show how both research areas offer potential for combination, why more research should be done in this direction, and how this would increase the trustability of AI systems.
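To make the combination concrete, the following is a minimal illustrative sketch (not the paper's method): an ensemble of toy linear models supplies an uncertainty estimate via member disagreement (the UQ side), while gradient-times-input attributions on the mean model supply a saliency-style explanation (the XAI side). All names and the model setup are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "ensemble" of linear models: each member has slightly perturbed weights.
# Disagreement across members yields a simple epistemic-uncertainty estimate,
# while per-feature gradients of the mean model yield an attribution map.
n_members, n_features = 10, 4
weights = rng.normal(loc=1.0, scale=0.1, size=(n_members, n_features))

def predict(x):
    """Return one prediction per ensemble member for input x."""
    return weights @ x

def explain_with_uncertainty(x):
    """Return (mean prediction, uncertainty, feature attributions)."""
    preds = predict(x)
    mean_pred = preds.mean()
    uncertainty = preds.std()            # spread across ensemble members (UQ)
    saliency = weights.mean(axis=0) * x  # gradient * input for a linear model (XAI)
    return mean_pred, uncertainty, saliency

x = np.array([1.0, 0.5, -0.2, 2.0])
mean_pred, unc, sal = explain_with_uncertainty(x)
print(f"prediction={mean_pred:.2f} (std {unc:.2f})")
print("feature attributions:", np.round(sal, 2))
```

Reporting the attribution vector together with the uncertainty estimate is one simple way an explanation could be qualified by the model's own confidence, which is the kind of joint treatment the abstract argues for.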
