Why AI cannot be explained | Rickard Brüel-Gabrielsson | TEDxBoston

  • Published: Nov 1, 2024

Comments • 7

  • @ChituOkoli
    @ChituOkoli 1 year ago +1

    It sounds like he is equating explainability with trust. While trust is important, there is a lot more to XAI than just trust. A huge part of XAI is actionability: receiving guidance on what we might change in the data to get the outcomes we want. While that might require some trust in the model, that's not the main point; we don't need the test of time to increase that kind of explainability. It is relevant right now. So, AI CAN be explained from that perspective.

    • @KellieAguado
      @KellieAguado 8 months ago +1

      I'd like to know whether anything has changed since the last time he gave this talk, or what his predictions for XAI in 2024 might be.

    • @Red_Blue_Green
      @Red_Blue_Green 27 days ago

      Great comment! The talk claims that AI is provably unexplainable as XAI traditionally defines it. The second part argues for the best way to keep developing AI sustainably: accepting the fact that AI cannot be explained. In this second part, I am loosely equating trust with our willingness to develop and use AI.

  • @sorjef
    @sorjef 1 year ago +1

    Thank you for the talk. The questions discussed give an opportunity to ponder and build some great mental models around the concept. I also recommend this author's Future of AI set of lectures!

  • @sajeeshsaji4623
    @sajeeshsaji4623 1 year ago +1

    Wow, it's empty 🤣🤣🤣