XAI4Science2026 Workshop

Explainable Data Science and Machine Learning for the Sciences Workshop in conjunction with EDBT/ICDT 2026

📍 Tampere, Finland
📅 24 March 2026

Motivation

Over the last two decades, the increasing availability of advanced computational resources and large scientific datasets has boosted data-driven methods for scientific discovery and innovation. From neuroscience and astrophysics to medicine and pharmaceutics, chemistry and materials science, and weather and climate science, scientists now process large volumes of experimental data and employ data science and machine learning techniques to validate and generate scientific hypotheses. Unfortunately, the AI systems used to engineer and analyse these data are mainly opaque: it is difficult to understand why they return a specific output, or what they would return if the input data were slightly different. They typically make automated decisions by fixating on a particular hypothesis under investigation, without providing evidence for or against it. Recent advances in explainable artificial intelligence (XAI) aim to bridge the gap between human cognitive decision-making and AI systems. However, XAI methods mainly focus on understanding AI model behaviour rather than on exploiting it to discover new human knowledge, and their impact on complex problem solving is currently limited by a lack of completeness, robustness, and universality across AI models, data modalities, and scientific pipelines.

The XAI4Science workshop aims to bring together researchers, practitioners, and domain experts working at the intersection of data science, machine learning, and the scientific disciplines to discuss advances in XAI methods that can effectively and efficiently support scientific discovery. The workshop covers a wide range of explanation techniques (i) for analysing diverse data modalities (e.g., images, time series, and graphs), (ii) using AI models of increasing generality (e.g., trained from scratch, pre-trained, or foundation models), and (iii) embedded in complex laboratory pipelines with scientists in the loop.

Research Topics

  • Generative AI methods for automatically proposing new hypotheses compatible with available scientific data and domain knowledge
  • Explanation methods for validating or contrasting scientific hypotheses by uncovering cause–effect relationships
  • Interpretable AI methods to discover spatial and temporal dynamics in complex systems
  • Formal verification to bridge the gap between data-driven decisions and domain-specific constraints
  • Multimodal explanations using graphical (visual), symbolic (equations), and sentential (verbal) interfaces
  • Quantitative evaluation of explanations' utility in scientific domains
  • Exploratory processes of explanations involving complex interactions between human, technical, and organizational factors

Invited Speakers

Giovanni Stilo

Associate Professor, Luiss Guido Carli University and Luiss Business School. Keynote: "Advances and Future Perspectives in Graph Counterfactual Explanations"

ABSTRACT

Counterfactual explanations (CEs) have emerged as a crucial paradigm for understanding the behavior of AI systems. While CE techniques are relatively mature in areas such as tabular learning and computer vision, their extension to graph-structured data, known as Graph Counterfactual Explanations (GCEs), is still a developing yet rapidly advancing research direction. GCE methods aim to construct minimally altered graphs that change the model's prediction while preserving structural coherence and semantic plausibility. This talk offers an overview of the field by introducing the conceptual foundations of GCE, describing the main families of explainers, and reviewing recent progress that spans perturbation-based approaches, global reasoning methods, dynamic-graph counterfactuals, and latent or spectral generative models. The talk provides practical tools and a visual comparison of representative techniques. The session concludes with a forward-looking discussion that highlights emerging research paths and open questions likely to shape the next phase of counterfactual explainability for graph-based learning.

BIO

Giovanni Stilo is an Associate Professor in the Department of AI, Data, and Decision Sciences at Luiss Guido Carli University and a Core Faculty member of Luiss Business School. Previously, he served as an Associate Professor at the University of L’Aquila, where he coordinated the Master’s program in Applied Data Science, and he has held visiting research appointments at Yahoo Labs. His recent research focuses on trustworthy and responsible machine learning, with emphasis on machine unlearning, graph counterfactual explainability, and algorithmic fairness. He has introduced new methodologies for forget-set identification, robust stochastic graph generation for counterfactual reasoning, and ensemble strategies for graph explanations. He is the founder of the AIIM Lab (https://aiimlab.org/), a collaborative research group that investigates how to make machine-learning and graph-based AI systems more explainable, fair, privacy-aware (including support for machine unlearning), and ethically responsible. The lab develops platforms, frameworks, and infrastructure for graph analytics, bias mitigation, and data mining. Within this context, he leads the development of GRETEL (https://aiimlab.org/projects/gretel.html), a dedicated framework for generating and evaluating Graph Counterfactual Explanations, and ERASURE (https://aiimlab.org/short/erasure), a comprehensive framework for systematic and reproducible research in machine unlearning. He actively contributes to the scientific community through the organization of international conferences (https://aiimlab.org/events.html), participation in program committees, and editorial work in the fields of artificial intelligence, data mining, and machine learning.

Paper Submission

Submission types

  • Full papers — up to 6 pages (excluding references)
  • Short papers — up to 4 pages (including references)

Submission process

  1. Prepare a PDF following the provided template (LaTeX/Word templates).
  2. Upload it via Microsoft CMT.
  3. Review is double-blind — remove author names and affiliations from submissions.

Submission link: https://cmt3.research.microsoft.com/XAIForScience2026

The Microsoft CMT service was used for managing the peer-reviewing process for this conference. This service was provided for free by Microsoft and they bore all expenses, including costs for Azure cloud services as well as for software development and support.

Rules

Concerning Conflicts of Interest (COI), duplicate submissions, and novelty requirements, we follow the rules of the EDBT call for papers (see here for more information: LINK TO EDBT CONFERENCE RULES). Papers must be formatted according to the latest double-column ACM SIG conference proceedings format, without any changes to fonts, margins, inter-column spacing, style, footers, and so on. Authors who need a LaTeX/Word template can refer to the following link: LATEX/WORD template; Overleaf users can refer to the following link: OVERLEAF.

Workshop Program

Below is the provisional program; all times are local to Tampere, Finland.

Time        | Session | Speakers / Notes
09:00–18:00 | TBA     | Organisers

The full program (with abstracts and poster locations) will be published here closer to the event.

Organisers

  • Vassilis Christophides (ETIS, CNRS, ENSEA)
  • Jin-Song Dong (National University of Singapore)
  • Nicolas Labroche (Univ. of Tours)
  • Michele Linardi (Publicity Chair) (CY Cergy Paris Université / ENSEA, ETIS Lab)
  • Evaggelia Pitoura (Univ. of Ioannina, Archimedes Research Unit of Athena RC)
  • Céline Robardet (INSA Lyon, LIRIS)
  • Yongfeng Zhang (Rutgers University)

Program Committee

Our program committee consists of researchers who are leading experts on topics spanning data engineering, model understanding, and artificial intelligence for the sciences. Below is a preliminary list of the committee members who have already confirmed their participation:

  • Julien Aligon (Université Toulouse Capitole, IRIT Lab, SIG Team)
  • Alexandre Chanson (Université de Tours, LIFAT Lab)
  • Emmanuel Doumard (Université de Tours, LIFAT Lab)
  • Moncef Garouani (Université Toulouse Capitole, IRIT Lab, SIG Team)
  • Leilani Gilpin (University of California Santa Cruz, AIEA Lab)
  • Riccardo Guidotti (University of Pisa, KDD Lab)
  • Matthijs van Leeuwen (Leiden University, LIACS Lab)
  • Michele Linardi (CY Cergy Paris Université / ENSEA, ETIS Lab)
  • Marie-Jeanne Lesot (Sorbonne Université / LIP6)
  • Patrick Marcel (Université d'Orléans, LIFO Lab)
  • Christophe Marsala (Sorbonne Université / LIP6)
  • Guillaume Renton (ENSEA, ETIS Lab)
  • Konstantinos Stefanidis (Tampere University, Data Science Subunit)
  • Simone Stumpf (University of Glasgow, School of Computing Science)
  • Juntao Tan (Rutgers University, Computer Science Department)
  • Aikaterini Tzompanaki (CY Cergy Paris Université / ENSEA, ETIS Lab)
  • Eirini Ntoutsi (Bundeswehr University Munich, AIML)

Sponsors

Sponsor and institute logos will be published here.

Contact

For general enquiries, program questions, or travel information, contact the organisers.