Reveal to Revise: an XAI-supported AI life cycle

  • Published: Oct 12, 2023
  • Curious whether your AI model is biased or relies on spurious features?
    Or do you want to know how to unlearn such a bias and validate your model?
    Our researcher Maximilian Dreyer presents his work, co-first-authored with Frederik Pahde, titled "Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models", published at MICCAI 2023 in Vancouver.
    Reveal to Revise (R2R) describes an XAI-supported AI life cycle consisting of steps for
    ● (0) model training,
    ● (1) identification of model weaknesses,
    ● (2) artifact modeling and annotation,
    ● and (3 to 0) model revision strategies and continued training, now enriched with the artifact information.
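    The iterative structure of these steps can be sketched in Python. This is a minimal illustration of the loop, assuming hypothetical stand-in helpers for each stage; it is not the authors' API from the R2R repository:

    ```python
    # Minimal sketch of the Reveal to Revise (R2R) loop structure.
    # All helper functions are illustrative stand-ins, not the published code.

    def train(model, data):
        # (0) standard model training; a no-op stand-in here
        return model

    def find_weaknesses(model, data):
        # (1) identify biases / spurious features, e.g. via XAI heatmaps;
        # stand-in: flag samples whose name ends with "_artifact"
        return [x for x in data if x.endswith("_artifact")]

    def annotate_artifacts(weaknesses):
        # (2) model and annotate the artifact for each flagged sample
        return {w: "annotated" for w in weaknesses}

    def revise(data, annotations):
        # (3) revise the training setup, e.g. by removing or correcting
        # artifact-bearing samples before the next training round
        return [x for x in data if x not in annotations]

    def r2r(model, data, iterations=3):
        """Run the R2R cycle until no further weaknesses are revealed."""
        for _ in range(iterations):
            model = train(model, data)               # step (0)
            weak = find_weaknesses(model, data)      # step (1)
            if not weak:
                break                                # nothing left to fix
            annotations = annotate_artifacts(weak)   # step (2)
            data = revise(data, annotations)         # step (3 -> 0)
        return model, data
    ```

    In the actual work, step (3) uses dedicated revision strategies (such as artifact-aware retraining) rather than simply dropping samples; the sketch only conveys the cyclic control flow.
    
    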
    Want to know more?
    ● check out the paper at link.springer.com/chapter/10.... or the preprint (Green Open Access) arxiv.org/abs/2303.12641
    ● try out R2R yourself using the code at github.com/maxdreyer/Reveal2R... to reproduce our results or fix your own models
    Among others, this work acknowledges funding from the European Union's Horizon 2020 research and innovation programme under grant agreement no. 965221 (iToBoS).
    Subscribe for more content like this!
    Follow us on social media:
    LinkedIn: / fraunhofer-hhi
    Twitter: / fraunhoferhhi
    Instagram: / fraunhofer_hhi
