Bridging the Gap: Exploring Explainability in Autonomous and Trusted Computing
A workshop affiliated with IEEE ATC 2024
As technology continues to advance, autonomous and trusted computing systems are becoming increasingly prevalent in domains such as healthcare, finance, and transportation.
However, with this advancement comes the challenge of ensuring that these systems are not only efficient and reliable but also transparent and understandable to users and stakeholders.
Explainability in autonomous and trusted computing has emerged as a crucial area of research and development to address this challenge.
This workshop aims to explore the significance, methods, and implications of explainability in autonomous and trusted computing systems.
Through interactive discussions, presentations, and hands-on activities, participants will share insights on the importance of explainability, current research trends, and practical approaches to enhancing transparency and trust in autonomous systems.
Workshop Objectives
- To understand the concept of explainability in the context of autonomous and trusted computing.
- To explore the importance of explainability for users, developers, and other stakeholders.
- To discuss current research and development efforts in the field of explainable autonomous systems.
- To identify challenges and opportunities in designing and implementing explainable autonomous and trusted computing systems.
- To facilitate knowledge sharing and collaboration among researchers, practitioners, and industry professionals in the field of autonomous systems.
Workshop Topics
The workshop will cover, but is not limited to, the following topics:
- Understanding Explainability: Concepts and Definitions
- The Importance of Explainability in Autonomous Systems
- Ethical Considerations in Explainable AI
- Explainable Machine Learning Models for Autonomous Systems
- Human-Centered Design Approaches to Explainability
- Transparency vs. Accuracy: Balancing Trade-offs in Autonomous Computing
- Case Studies: Real-World Applications of Explainable Autonomous Systems
- Explainability Techniques: From Post-hoc Interpretation to Model Design
- Visualizing and Communicating Decisions Made by Autonomous Systems
- Legal and Regulatory Perspectives on Explainability
- User Trust and Acceptance in Explainable Autonomous Systems
- Explainability in Multi-Agent Systems and Collaborative Robotics
- Addressing Bias and Fairness in Explainable AI
- Evaluating the Effectiveness of Explainability Methods
- Security Implications of Explainable Autonomous Systems
- Educational Initiatives for Teaching Explainable AI Concepts
- Future Directions in Explainable Autonomous and Trusted Computing
Workshop Organizers
- Dr. Hanwei Zhang, Postdoctoral Researcher at Saarland University in Germany, working on the Explainable Intelligent Systems (EIS) project as well as on adversarial attacks against 3D object detection in autonomous driving.
- Prof. Holger Hermanns, Professor of Computer Science at Saarland University in Germany, holder of the Chair of Dependable Systems and Software, member of Academia Europaea, spokesperson of the Center for Perspicuous Computing, and recipient of multiple grants from the European Research Council.
Important Dates
- Submission Deadline: September 1, 2024
- Notification of Acceptance: October 7, 2024
- Camera-ready Version: October 14, 2024
Workshop Proceedings
Accepted papers will be included in the regular conference proceedings of ATC 2024 and published by IEEE.
Submission Instructions
Please submit your paper to the workshop track (Track 10: BridgeXT) via EDAS.
Each workshop paper must be 4 to 8 pages in length, including tables, figures, references, and appendices, and must follow the IEEE Computer Society Proceedings Format. All papers must be submitted electronically in PDF format through the designated website.
Workshop Support
This workshop is supported by VolkswagenStiftung as part of the EIS Project (Grant AZ 98514).