Maximilian M. - SHAPtivating Insights: unravelling blackbox AI models

  • Published: 18 Sep 2024
  • The advancements of (black box) artificial intelligence toolkits in recent years have made implementing AI models a commodity. However, model implementation is only the beginning of application development. Understanding, optimizing, and troubleshooting models remains a constant challenge. In particular, the understanding (or explainability) of AI models is expected to become a requirement under the EU AI Act, which is expected to pass this year.
    In this presentation, SHAP (SHapley Additive exPlanations), a model-agnostic AI explainability framework, is explained using the example of a tabular classification problem in Python. First, we look at the theory behind SHAP and demonstrate its practical implementation in Python. Second, the usage of the SHAP framework is showcased for both global and local explainability, using the filtering of bank transactions for suspicious activities as an example. It will be shown how SHAP was used to perform feature selection, understand the model's sensitivity to individual features, and explain single predictions. Last, a translation from SHAP to human-readable output will be shown, which was developed to explain the model predictions to the end user (a minimal sketch of such a translation follows the time breakdown below).
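    To make the global/local distinction concrete, here is a minimal, self-contained sketch of the SHAP workflow on synthetic tabular data. The talk's actual bank-transaction data and model are not public, so the dataset, model choice, and feature names below are illustrative assumptions:

    import numpy as np
    import pandas as pd
    import shap
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    # Synthetic stand-in for a tabular "suspicious transaction" dataset.
    X, y = make_classification(n_samples=500, n_features=6, random_state=0)
    X = pd.DataFrame(X, columns=[f"feature_{i}" for i in range(6)])
    model = RandomForestClassifier(random_state=0).fit(X, y)

    # TreeExplainer computes Shapley values efficiently for tree ensembles.
    explainer = shap.TreeExplainer(model)
    sv = explainer(X)
    # Binary classifiers may yield one SHAP column per class; keep class 1.
    if sv.values.ndim == 3:
        sv = sv[..., 1]

    # Global explainability: feature impact across the whole dataset.
    shap.plots.beeswarm(sv)
    # Mean |SHAP| per feature is one simple basis for feature selection.
    print(np.abs(sv.values).mean(axis=0))

    # Local explainability: why the model scored one particular row as it did.
    shap.plots.waterfall(sv[0])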
    Time breakdown:
    - General + domain introduction: 5 min
    - SHAP theory and Python framework: 5 min
    - Deep-dive into global and local explainability: 10 min
    - Converting SHAP values to human-readable output: 5 min
    - Q&A: 5 min
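
    As an illustration of the last agenda item, here is a hypothetical helper that turns the top SHAP contributions of a single prediction into sentences. It continues the sketch above; the explain_in_words helper and its wording are assumptions, not the talk's actual implementation:

    def explain_in_words(explanation, top_k=3):
        # Rank the features of one prediction by absolute SHAP contribution.
        order = np.argsort(-np.abs(explanation.values))[:top_k]
        lines = []
        for i in order:
            direction = "increased" if explanation.values[i] > 0 else "decreased"
            lines.append(
                f"'{explanation.feature_names[i]}' = {explanation.data[i]:.2f} "
                f"{direction} the suspicion score by {abs(explanation.values[i]):.3f}."
            )
        return "\n".join(lines)

    # Plain-language summary of the first prediction's explanation.
    print(explain_in_words(sv[0]))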

Comments • 2

  • @wdvogtjr 8 months ago +2

    I’m a huge fan of the Explainable Boosting Machine (EBM) algorithm. It offers fantastic performance and complete explainability (without the need to one-hot encode categorical features).

    • @rodrigoccamargos 5 months ago

      I'm using EBM in my research and it works very well on tabular data. One thing that could be improved is interpretML's visualization of feature importance. Anyway, it is a great tool from Microsoft.
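
    For readers who want to try what the commenters describe, here is a minimal sketch using interpretML's ExplainableBoostingClassifier; the synthetic data and parameters are assumptions. EBMs accept categorical features directly, which is why no one-hot encoding is needed:

    import pandas as pd
    from interpret import show
    from interpret.glassbox import ExplainableBoostingClassifier
    from sklearn.datasets import make_classification

    # Synthetic tabular data as a stand-in.
    X, y = make_classification(n_samples=500, n_features=6, random_state=0)
    X = pd.DataFrame(X, columns=[f"feature_{i}" for i in range(6)])

    ebm = ExplainableBoostingClassifier(random_state=0)
    ebm.fit(X, y)

    # Global explanation: per-feature shape functions and importances.
    show(ebm.explain_global())
    # Local explanation: per-feature contributions to individual predictions.
    show(ebm.explain_local(X[:5], y[:5]))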