Generating detailed saliency maps using model-agnostic methods

Maciej Sakowicz

The emerging field of Explainable Artificial Intelligence focuses on methods for explaining the decision-making processes of complex machine learning models. In Computer Vision, explanations are typically provided as saliency maps, which visualize the importance of individual input pixels with respect to the model's prediction. In this work, we focus on a perturbation-based, model-agnostic explainability method called RISE, elaborate on observed shortcomings of its grid-based approach, and propose two modifications: replacing square occlusions with convex polygonal occlusions based on cells of a Voronoi mesh, and adding an informativeness guarantee to the occlusion mask generator. These modifications, collectively called VRISE (Voronoi-RISE), are meant to, respectively, improve the accuracy of maps generated using large occlusions and accelerate the convergence of saliency maps in cases where sampling density is either very low or very high. We perform a quantitative comparison of the accuracy of saliency maps produced by VRISE and RISE on the validation split of ILSVRC2012, using a saliency-guided content insertion/deletion metric and a localization metric based on bounding boxes. Additionally, we explore the space of configurable occlusion pattern parameters to better understand their influence on the saliency maps produced by RISE and VRISE. We also describe and demonstrate two effects arising from RISE's random sampling approach, observed over the course of experimentation: "feature slicing" and "saliency misattribution". Our results show that convex polygonal occlusions yield more accurate maps for coarse occlusion meshes and multi-object images, but the improvement is not guaranteed in other cases. The informativeness guarantee is shown to increase the convergence rate without incurring significant computational overhead.
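To make the two proposed modifications concrete, the sketch below illustrates the general idea in plain Python: occlusion masks whose regions are Voronoi cells (each pixel is assigned to its nearest randomly placed site), plus a simple form of informativeness guarantee that resamples any mask which would reveal or occlude the entire image. This is an illustrative sketch under assumptions, not the authors' implementation; the function name, parameters, and the exact form of the guarantee are hypothetical.

```python
import random

def voronoi_masks(width, height, n_cells, n_masks, p_keep=0.5, seed=0):
    """Sketch of Voronoi-cell occlusion mask generation (illustrative only)."""
    rng = random.Random(seed)
    # Random sites define a Voronoi partition of the image plane.
    sites = [(rng.uniform(0, width), rng.uniform(0, height))
             for _ in range(n_cells)]
    # Assign each pixel to its nearest site (brute-force nearest neighbour).
    cell_of = [[min(range(n_cells),
                    key=lambda k: (x - sites[k][0]) ** 2 + (y - sites[k][1]) ** 2)
                for x in range(width)] for y in range(height)]
    masks = []
    for _ in range(n_masks):
        # Hypothetical informativeness guarantee: resample until the mask
        # neither keeps nor drops every cell, so each mask carries information.
        while True:
            keep = [rng.random() < p_keep for _ in range(n_cells)]
            if any(keep) and not all(keep):
                break
        masks.append([[1 if keep[cell_of[y][x]] else 0
                       for x in range(width)] for y in range(height)])
    return masks
```

As in RISE, such binary masks would be applied multiplicatively to the input image, and a saliency map obtained as a weighted sum of masks, weighted by the model's prediction scores on the masked inputs.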
