S03 - Explainable, Robust and Trustworthy Machine Learning and its Applications

Chairs

Abstract

As the implementation and use of machine learning (ML) systems continue to gain significance in contemporary society, the focus is swiftly shifting from purely optimizing model performance towards building models that are both understandable and interpretable. This new emphasis stems from a growing need for applications that not only solve complex problems with high accuracy, but also provide clear, transparent insights into their decision-making processes for a range of end-users and stakeholders. In Europe, this must also be understood in the context of new regulations such as the Artificial Intelligence Act, with its risk-based model transparency requirements. As a result, there is a surge of interest in techniques and methodologies that enable model explainability and interpretability, paving the way for more trustworthy and user-friendly AI solutions.

The aim of this special session is to gather researchers working on Explainable AI (xAI) in ML, with a strong emphasis on the practical applications of this framework. Its primary goal is to present innovative methods that make ML models more interpretable, transparent, and trustworthy while preserving their performance. We also invite contributions that go beyond theory, showcasing tangible real-world implementations in different application scenarios, with a particular interest in the field of computational neuroscience. By centering on application-driven insights, this session seeks to bridge the gap between basic research and operational solutions, ultimately aiming to steer the ML community toward more responsible and societally beneficial AI technologies.

We are seeking contributions that address practical applications, presenting innovative approaches and technological advancements in xAI. Topics of interest include, but are not limited to:

  • Explainable methods in medicine and healthcare
  • Explainable methods in computational neuroscience
  • Business and public governance applications of xAI
  • Explainable biomedical knowledge discovery with ML
  • xAI in agriculture, forestry and environmental applications
  • xAI and human-computer interaction
  • xAI methods for linguistics and machine translation
  • Explainability in decision-support systems
  • Best practices for presenting model explanations to non-technical stakeholders
  • Auto-encoders and explainability of latent spaces
  • Causal inference and explanations
  • Post-hoc methods for explainability
  • Reinforcement learning for enhancing xAI systems
  • xAI for Deep Learning methods