Shortcomings of Top-Down Randomization-Based Sanity Checks for Evaluations of DNN Explanations

  • Published: 24 May 2023
  • Shortcomings of Top-Down Randomization-Based Sanity Checks for Evaluations of Deep Neural Network Explanations (CVPR 2023)
    In our recent CVPR 2023 paper, we analyse properties of the top-down model randomization test proposed in "Sanity Checks for Saliency Maps" (NeurIPS 2018), motivated by discrepancies we observe between sanity checks and measures of explanation faithfulness in how they rank attribution methods. We find that, contrary to intuition, model randomization only alters prediction behavior to a limited extent, and that the similarity scores used to quantify the effect of randomization are sensitive to noise in the attributions (a minimal sketch of the cascading randomization check is given below). As a result, sanity checks should be used as binary tests indicating whether an explanation method is affected by model randomization at all, rather than as a tool for ranking attribution methods. More generally, we strongly advise against using any single evaluation method as the sole criterion for ranking attributions.
    Find the paper here: openaccess.thecvf.com/content...
    Subscribe for more content like this!
    Follow us on social media:
    LinkedIn: / fraunhofer-hhi
    Twitter: / fraunhoferhhi
    Instagram: / fraunhofer_hhi
  • Science
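
To make the top-down (cascading) randomization check concrete, here is a minimal sketch, assuming a small PyTorch classifier on CPU, plain gradient-times-input as the attribution method, and Spearman rank correlation as the similarity score. The layer ordering, attribution method, and metric are illustrative assumptions, not the paper's exact setup.

```python
import copy
from scipy.stats import spearmanr

def attribute(model, x, target):
    # Gradient x input attribution for a single-sample batch x.
    model.eval()
    x = x.clone().requires_grad_(True)
    model.zero_grad()
    model(x)[0, target].backward()
    return (x.grad * x).detach()

def cascading_randomization_check(model, x, target):
    # Randomize parameterized layers from the top down (cumulatively) and
    # record how similar the resulting attributions stay to the original.
    baseline = attribute(model, x, target).flatten()
    randomized = copy.deepcopy(model)
    layers = [m for m in randomized.modules() if hasattr(m, "reset_parameters")]
    scores = []
    for layer in reversed(layers):        # output layer first, input layer last
        layer.reset_parameters()          # earlier resets remain in place
        attr = attribute(randomized, x, target).flatten()
        rho, _ = spearmanr(baseline.numpy(), attr.numpy())
        scores.append((layer.__class__.__name__, rho))
    return scores
```

In this sketch, an attribution method that is insensitive to model randomization would keep rho high across all layers; the paper's point is that such similarity scores are too noise-sensitive to serve as a ranking of attribution methods, so the outcome should only be read as a pass/fail signal.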
