XAI4Science2026 Workshop
Explainable Data Science and Machine Learning for the Sciences Workshop in conjunction with EDBT/ICDT 2026
Motivation
Research Topics
- Generative AI methods for automatically proposing new hypotheses compatible with available scientific data and domain knowledge
- Explanation methods for validating or contrasting scientific hypotheses by uncovering cause–effect relationships
- Interpretable AI methods to discover spatial and temporal dynamics in complex systems
- Formal verification to bridge the gap between data-driven decisions and domain-specific constraints
- Multimodal explanations using graphical (visual), symbolic (equations), and sentential (verbal) interfaces
- Quantitative evaluation of explanations' utility in scientific domains
- Exploratory processes of explanations involving complex interactions between human, technical, and organizational factors
Invited Speakers
Giovanni Stilo
Associate Professor, Luiss Guido Carli University, Luiss Business School. — Keynote: "Advances and Future Perspectives in Graph Counterfactual Explanations"
ABSTRACT
Counterfactual explanations (CEs) have emerged as a crucial paradigm for understanding the behavior of AI systems. While CE techniques are relatively mature in areas such as tabular learning and computer vision, their extension to graph-structured data, known as Graph Counterfactual Explanations (GCEs), is still a developing yet rapidly advancing research direction. GCE methods aim to construct minimally altered graphs that change the model’s prediction while preserving structural coherence and semantic plausibility. This talk offers an overview of the field by introducing the conceptual foundations of GCE, describing the main families of explainers, and reviewing recent progress that spans perturbation-based approaches, global reasoning methods, dynamic-graph counterfactuals, and latent or spectral generative models. The talk provides practical tools and a visual comparison of representative techniques. The session concludes with a forward-looking discussion that highlights emerging research paths and open questions likely to shape the next phase of counterfactual explainability for graph-based learning.BIO
Giovanni Stilo is an Associate Professor in the Department of AI, Data, and Decision Sciences at Luiss Guido Carli University and a Core Faculty member of Luiss Business School. Previously, he served as an Associate Professor at the University of L’Aquila, where he coordinated the Master’s program in Applied Data Science, and he has held visiting research appointments at Yahoo Labs. His recent research focuses on trustworthy and responsible machine learning, with emphasis on machine unlearning, graph counterfactual explainability, and algorithmic fairness. He has introduced new methodologies for forget-set identification, robust stochastic graph generation for counterfactual reasoning, and ensemble strategies for graph explanations. He is the founder of the AIIM Lab (https://aiimlab.org/), a collaborative research group that investigates how to make machine-learning and graph-based AI systems more explainable, fair, privacy-aware (including support for machine unlearning), and ethically responsible. The lab develops platforms, frameworks, and infrastructure for graph analytics, bias mitigation, and data mining. Within this context, he leads the development of GRETEL (https://aiimlab.org/projects/gretel.html), a dedicated framework for generating and evaluating Graph Counterfactual Explanations, and ERASURE (https://aiimlab.org/short/erasure), a comprehensive framework for systematic and reproducible research in machine unlearning. He actively contributes to the scientific community through the organization of international conferences (https://aiimlab.org/events.html), participation in program committees, and editorial work in the fields of artificial intelligence, data mining, and machine learning.
Paper Submission
Submission types
- Full papers — up to 6 pages (without references)
- Short papers — up to 4 pages (including references)
Submission process
- Prepare PDF following the provided template (LaTeX/Word templates).
- Upload via Microsoft CMT.
- Review is double-blind — remove author names and affiliations from submissions.
Submission link: https://cmt3.research.microsoft.com/XAIForScience2026
The Microsoft CMT service was used for managing the peer-reviewing process for this conference. This service was provided for free by Microsoft and they bore all expenses, including costs for Azure cloud services as well as for software development and support.
Rules
Concerning Conflict of Interest (COI), Duplicate Submissions, and Novelty Requirements, we follow the rules of the EDBT call for papers (see here for more information: LINK TO EDBT CONFERENCE RULES). Papers should be formatted according to the latest ACM Proceedings Format, without any changes to fonts, margins, inter-column spacing, style, footers, and so on. All manuscripts should be formatted according to the double-column ACM SIG conference proceedings format. Authors who need a LaTeX/Word template can refer to the following link: LATEX/WORD template; Overleaf authors can refer to the following link: OVERLEAF.
Workshop Program
Below is the provisional program — times are local (Example Time).
The full program (with abstracts and poster locations) will be published here closer to the event.
Organisers
- Vassilis Christophides (ETIS, CNRS, ENSEA)
- Jin-Song Dong (National University of Singapore)
- Nicolas Labroche (Univ. of Tours)
- Michele Linardi (Publicity Chair) (CY Cergy Paris Université / ENSEA, ETIS Lab)
- Evaggelia Pitoura (Univ. of Ioannina, Archimedes Research Unit of Athena RC)
- Céline Robardet (INSA Lyon, LIRIS)
- Yongfeng Zhang (Rutgers University)
Program Committee
Our program committee consists of researchers who are leading experts on topics spanning data engineering, model understanding, and artificial intelligence for the sciences. Below is a preliminary list of the committee members who have already confirmed their participation:
- Julien Aligon (Université Toulouse Capitole, IRIT Lab, SIG Team)
- Alexandre Chanson (Université de Tours, LIFAT Lab)
- Emmanuel Doumard (Université de Tours, LIFAT Lab)
- Moncef Garouani (Université Toulouse Capitole, IRIT Lab, SIG Team)
- Leilani Gilpin (University of California Santa Cruz, AIEA Lab)
- Riccardo Guidotti (University of Pisa, KDD Lab)
- Matthijs van Leeuwen (Leiden University, LIACS Lab)
- Michele Linardi (CY Cergy Paris Université / ENSEA, ETIS Lab)
- Marie-Jeanne Lesot (Sorbonne Université / LIP6)
- Patrick Marcel (Université d'Orléans, LIFO Lab)
- Christophe Marsala (Sorbonne Université / LIP6)
- Guillaume Renton (ENSEA, ETIS Lab)
- Konstantinos Stefanidis (Tampere University, Data Science Subunit)
- Simone Stumpf (University of Glasgow, School of Computing Science)
- Juntao Tan (Rutgers University, Computer Science Department)
- Aikaterini Tzompanaki (CY Cergy Paris Université / ENSEA, ETIS Lab)
- Eirini Ntoutsi (Bundeswehr University Munich, AIML)
Sponsors
Contact
For general enquiries, programme questions, or travel information, contact the organisers.
- Workshop email: vassilis.christophides@ensea.fr