About
Deep Reinforcement Learning (DRL) has recently made remarkable progress in solving complex tasks in several application domains, such as games, finance, autonomous driving, and recommendation systems. However, the black-box nature of deep neural networks and the complex interaction among various factors, such as the environment, reward policy, and state representation, raise challenges in understanding and interpreting DRL models’ decision-making processes. To address these issues, the workshop aims to explore the intersection of DRL with another important research area in artificial intelligence: Explainable Artificial Intelligence (XAI). XAI has become a crucial topic, aiming to enhance the accountability, trustworthiness, and accessibility of autonomous systems.
The workshop aims to bring together researchers, practitioners, and experts from both communities (DRL and XAI) by primarily focusing on methods, techniques, and frameworks that enhance the explainability and interpretability of DRL algorithms. Additionally, we will work towards defining standardized metrics and protocols to evaluate the performance and transparency of autonomous systems.
Topics
The topics include but are not limited to:
- XAI methods for or applied to Deep Learning (even if they do not involve reinforcement learning)
- Evaluation of XAI methods
- Self-Explainable Deep Reinforcement Learning
- Post-hoc methods for Deep Reinforcement Learning
- XAI-based Augmentation for Deep Reinforcement Learning
- Policy Interpretation
- Current Trends and Challenges in Explaining Deep Reinforcement Learning
- Reinforcement Learning-based XAI methods
- Self-Explainable Deep Learning
- Interpreting Reinforcement Learning
- Debugging Deep Reinforcement Learning using XAI
- Applications of Deep Reinforcement Learning combined with XAI to real-world tasks
- Position papers on the topic of the workshop.
Note: in case of borderline decisions, XAI methods applied to deep reinforcement learning models will be preferred when selecting the contributions to be presented during the workshop.
Schedule
08:50 a.m. - 09:00 a.m. | Welcome (Opening) |
09:00 a.m. - 09:45 a.m. | A View From Somewhere: Decomposing the Dimensions of Human Decision Making (Invited Talk) | Jerone Andrews
09:45 a.m. - 10:05 a.m. | Clustered Policy Decision Ranking (Contributed Paper) | Mark Levin and Hana Chockler
10:05 a.m. - 10:25 a.m. | Self-Supervised Interpretable Sensorimotor Learning via Latent Functional Modularity (Contributed Paper) | Hyunki Seong and Hyunchul Shim
10:30 a.m. - 11:00 a.m. | Coffee Break |
11:00 a.m. - 11:45 a.m. | Toward Human-Centered Explainable Artificial Intelligence (Invited Talk) | Mark Riedl
11:45 a.m. - 12:05 p.m. | Interpretable Brain-Inspired Representations Improve RL Performance on Visual Navigation Tasks (Contributed Paper) | Moritz Lange, Raphael C. Engelhardt, Wolfgang Konen, Laurenz Wiskott
12:05 p.m. - 12:25 p.m. | Efficiently Quantifying Individual Agent Importance in Cooperative MARL (Contributed Paper) | Omayma Mahjoub, Ruan de Kock, Siddarth Singh, Wiem Khlif, Abidine Vall, Kale-ab Tessera, Rihab Gorsane, Arnu Pretorius
12:30 p.m. - 02:00 p.m. | Break (Lunch) |
02:00 p.m. - 02:45 p.m. | Toward Actionable Explanations for Autonomy (Invited Talk) | Melinda Gervasio
02:45 p.m. - 03:00 p.m. | Closing Remarks |
03:00 p.m. | Poster Session |
03:30 p.m. - 04:00 p.m. | Coffee Break |
04:00 p.m. - 05:00 p.m. | Poster Session (contd.) |
Pre-Recorded | Explaining RL Agents from the Lens of Perception, Memory, and Uncertainties (LINK) (Invited Talk) | Anurag Koul
Invited Speakers
- Mark Riedl, Georgia Institute of Technology: Toward Human-Centered Explainable Artificial Intelligence
- Anurag Koul, Microsoft Research: Explaining RL Agents from the Lens of Perception, Memory, and Uncertainties
Organization
Organizers
Accepted Papers
- Closed Drafting as a Case Study for First-Principle Interpretability, Memory, and Generalizability in Deep Reinforcement Learning. Ryan Rezai and Jason Wang [Paper]
- Clustered Policy Decision Ranking. Mark Levin and Hana Chockler [Paper]
- Efficiently Quantifying Individual Agent Importance in Cooperative MARL. Omayma Mahjoub, Ruan de Kock, Siddarth Singh, Wiem Khlif, Abidine Vall, Kale-ab Tessera, Rihab Gorsane, Arnu Pretorius [Paper]
- How much can change in a year? Revisiting Evaluation in Multi-Agent Reinforcement Learning. Siddarth Singh, Omayma Mahjoub, Ruan John de Kock, Wiem Khlifi, Abidine Vall, Kale-ab Tessera, Arnu Pretorius [Paper]
- On Diagnostics for Understanding Agent Training Behaviour in Cooperative MARL. Wiem Khlif, Siddarth Singh, Omayma Mahjoub, Ruan John de Kock, Abidine Vall, Rihab Gorsane, Arnu Pretorius [Paper]
- Interpretable Brain-Inspired Representations Improve RL Performance on Visual Navigation Tasks. Moritz Lange, Raphael C. Engelhardt, Wolfgang Konen, Laurenz Wiskott [Paper]
- Explainable Reinforcement Learning for Alzheimer’s Disease Progression Prediction: a SHAP-based Approach. Raja Farrukh Ali, Ayesha Farooq, John Woods, Emmanuel Adeniji, Vinny Sun, William Hsu [Paper]
- Self-Supervised Interpretable Sensorimotor Learning via Latent Functional Modularity. Hyunki Seong, Hyunchul Shim [Paper]
Call for Papers
We solicit submissions of previously unpublished papers, both short and full. Short papers are up to 4 pages and may not include supplemental material. Full papers are up to 7 pages and may include supplementary material (unlimited pages) attached at the end of the manuscript; note that consulting the supplementary material is at the discretion of the reviewers. Reference pages and supplemental material do not count toward the page limit, so references are unlimited in both cases.
Submissions must be novel contributions covering any of the topics listed above. We do not accept work that has already been accepted or published at another venue before the submission deadline, or that is presented at the main AAAI conference, including as part of an invited talk.
Papers must be submitted through the OpenReview system (LINK). This workshop is not archival; papers submitted to the workshop can therefore be submitted to future conferences (e.g. ICML, IJCAI) if the acceptance notification comes after the workshop date (February 27).
We encourage authors to link an anonymized repository containing the code to replicate the results in the body of the paper. While this is not mandatory, it will be taken into account positively during the reviewing process and the selection of the contributed talks. You can use Anonymous Github or upload your repository to a service that allows anonymity (e.g. GDrive allows anonymous links).
Submissions must be in an anonymized paper format following the same template as the AAAI track (see HERE). They will undergo double-blind peer review. Any data included in the submission (paper, supplemental material, linked code) must be anonymized.
Accepted works will be presented as contributed talks or as posters, depending on schedule constraints. At least one author of each accepted paper must attend the workshop and present the work during the contributed talks and the poster session.
Important Dates
Submission system opens: Oct 15 11:59 PM GMT, 2023
Submission deadline: ~~Nov 15~~ Nov 21 11:59 PM GMT, 2023 (Extended!)
Notification date: Dec 10 11:59 PM GMT, 2023
Workshop: Feb 27 09:00 AM GMT-7, 2024
Submission Link: OpenReview
FAQ
- Q: My work is a model-agnostic XAI method, and one of its tests has been applied to deep learning without considering reinforcement learning. Does it fit the workshop’s topic?
  A: Yes, it does.
- Q: My work is an XAI method tailored for deep learning, without considering reinforcement learning. Does it fit the workshop’s topic?
  A: Yes, it does.
- Q: My work is an XAI method tailored for a machine learning model other than deep learning (e.g. a decision tree). Does it fit the workshop’s topic?
  A: We plan to allocate the first part of the workshop to an introduction to XAI methods. Therefore, a limited number of slots could be allocated to this kind of work if needed (e.g. strong submissions from this set of works, a low number of submissions covering RL, etc.). We invite you to submit your work even if the method is applied to a different kind of machine learning model.
- Q: Will rejected papers be displayed on OpenReview?
  A: No, they will not be displayed on OpenReview.
- Q: Will accepted papers be displayed on OpenReview?
  A: Yes, they will be displayed on OpenReview.
- Q: Will reviews be public after the review stage?
  A: No, reviews will be visible only to authors, program chairs, and reviewers.
- Q: Is the paper archived in any proceedings?
  A: No, there will be no official proceedings. We will only host the papers on our website.
- Q: Can I attend the workshop without being registered for the AAAI 2024 conference?
  A: The workshop is hosted by AAAI. Therefore, you have to be registered for the AAAI 2024 conference in order to attend the talks and the poster session of our workshop.
- Q: Is there any additional cost associated with the acceptance of my paper?
  A: There are no additional costs associated with our workshop.
Program Committee
- Aaquib Tabrez, University of Colorado Boulder
- Alexander Binder, Singapore Institute of Technology
- Andrea Fanti, Sapienza University of Rome
- Andrew Silva, Georgia Institute of Technology
- Bettina Finzel, University of Bamberg
- Chaofan Chen, University of Maine
- Daniel Hein, Siemens AG
- David V. Pynadath, University of Southern California
- Di Wang, University of Illinois at Chicago
- Dmitry Gnatyshak, Barcelona Supercomputing Center
- Erico Tjoa, Stanford University
- Eunji Kim, Seoul National University
- Francisco Cruz, University of New South Wales Sydney
- Fredrik Heintz, Linköping University
- George Vouros, University of Piraeus
- Giorgio Angelotti, ISAE-Supaero
- Ishan Durugkar, Sony AI
- James MacGlashan, Sony AI
- Jaesik Choi, KAIST
- Jasmina Gajcin, Trinity College Dublin
- Jasper van der Waa, TNO
- José Antonio Oramas Mogrovejo, University of Antwerp
- Juan Marcelo Parra Ullauri, University of Bristol
- Lindsay Sanneman, MIT
- Marco Valentino, Idiap Research Institute
- Moritz Lange, Ruhr University Bochum
- Nabil Aouf, City University of London
- Pradyumna Tambwekar, Georgia Institute of Technology
- Randy Goebel, University of Alberta
- Raphael C. Engelhardt, TH Köln
- Riccardo Guidotti, University of Pisa
- Sarra Alqahtani, Wake Forest University
- Satyapriya Krishna, Harvard University
- Shruti Mishra, Sony AI
- Suryabhan Singh Hada, University of California
- Tianpei Yang, Tianjin University
- Tobias Huber, Augsburg University
- Tom Bewley, University of Bristol
- Umang Bhatt, NYU’s Center for Data Science
- Wojciech Samek, TU Berlin
- Wolfgang Stammer, Technical University Darmstadt
- Xiangyu Peng, Georgia Institute of Technology
- Xuan Chen, Purdue University
- Yotam Amitai, Technion Israel Institute of Technology
- Ziheng Chen, Stony Brook University
Contacts
If you have any questions, feel free to contact us at any of the following email addresses:
- ragno AT diag.uniroma1.it
- mproietti AT diag.uniroma1.it
- elochang AT ucsc.edu