November 4, 2019
To make Artificial Intelligence (AI) technologies applicable across a wider range of fields, new ways for humans to verify that an AI system's decision-making agrees with their own ground-truth knowledge need to be explored. With their book “Explainable AI: Interpreting, Explaining and Visualizing Deep Learning”, researchers from Fraunhofer HHI, Technische Universität Berlin (TU Berlin), the University of Oxford, and the Technical University of Denmark (DTU) aim to provide a comprehensive overview of the most important issues and ideas.
With the development of ‘intelligent’ systems that can make decisions and act autonomously, science hopes to achieve faster and more consistent decision-making. However, an obstacle to broader adoption of AI technology is the inherent risk that comes with ceding human control and oversight to ‘intelligent’ machines, especially when AI technologies are expected to perform sensitive tasks involving critical infrastructure or affecting human well-being and health. It is therefore crucial to limit the possibility of improper, non-robust, and unsafe decisions and actions.
To address this, an additional step needs to be introduced: before an AI system is deployed, its behavior must be validated, establishing guarantees that it will continue to perform as expected in a real-world environment. To support this, explainable AI (XAI) has emerged as a subfield of AI focused on exposing complex AI models to humans in a systematic and interpretable manner.
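As a minimal illustration of the kind of technique the field studies (not a method prescribed by the book), the sketch below computes a gradient-based saliency map: the gradient of the predicted class score with respect to the input indicates which input pixels most influenced the decision. The model and input are placeholders (an untrained torchvision ResNet and random data), assumed only for the sake of a runnable example.

```python
# Illustrative sketch of gradient-based saliency, one common XAI technique.
# Assumptions: PyTorch and torchvision are available; the model and "image"
# are placeholders and carry no meaning beyond demonstrating the mechanics.
import torch
import torchvision.models as models

model = models.resnet18(weights=None)  # untrained placeholder model
model.eval()

x = torch.rand(1, 3, 224, 224, requires_grad=True)  # dummy input image

scores = model(x)                 # forward pass: class scores
top_class = scores.argmax(dim=1)  # the model's predicted class
scores[0, top_class].backward()   # gradient of that score w.r.t. the input

# Per-pixel relevance: maximum absolute gradient over the color channels.
saliency = x.grad.abs().max(dim=1)[0]
print(saliency.shape)  # (1, 224, 224) heatmap highlighting influential pixels
```

The book surveys a much broader range of such interpretation methods, along with their theoretical underpinnings and applications.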
Wojciech Samek, Head of the Machine Learning Group in the Video Coding and Analytics Department at Fraunhofer HHI, Grégoire Montavon, Research Associate at TU Berlin, Andrea Vedaldi, Associate Professor at the University of Oxford, Lars Kai Hansen, Professor at the Technical University of Denmark, and Klaus-Robert Müller, Professor of Machine Learning at TU Berlin, have now published the first edited book on XAI. “Explainable AI: Interpreting, Explaining and Visualizing Deep Learning” was published by Springer International Publishing in 2019 and provides a timely snapshot of recently proposed algorithms, theory, and applications of interpretable and explainable AI techniques. The authors reflect the current discourse in the field and point to directions for future development.
More information on the book (DOI: 10.1007/978-3-030-28954-6) can be found here.