Explainable AI: Interpreting, Explaining and Visualizing Deep Learning

The development of "intelligent" systems that can take decisions and perform autonomously might lead to faster and more consistent decisions. A limiting factor for a broader adoption of AI technology are the inherent risks that come with giving up human control and oversight to "intell...

Bibliographic Details
Corporate Author: SpringerLink (Online service)
Other Authors: Samek, Wojciech (Editor); Montavon, Grégoire (Editor); Vedaldi, Andrea (Editor); Hansen, Lars Kai (Editor); Müller, Klaus-Robert (Editor) (relator: http://id.loc.gov/vocabulary/relators/edt)
Format: Electronic eBook
Language: English
Published: Cham : Springer International Publishing : Imprint: Springer, 2019.
Edition: 1st ed. 2019.
Series: Lecture Notes in Artificial Intelligence ; 11700
Online Access: Full Text via HEAL-Link
Table of Contents:
  • Towards Explainable Artificial Intelligence
  • Transparency: Motivations and Challenges
  • Interpretability in Intelligent Systems: A New Concept?
  • Understanding Neural Networks via Feature Visualization: A Survey
  • Interpretable Text-to-Image Synthesis with Hierarchical Semantic Layout Generation
  • Unsupervised Discrete Representation Learning
  • Towards Reverse-Engineering Black-Box Neural Networks
  • Explanations for Attributing Deep Neural Network Predictions
  • Gradient-Based Attribution Methods
  • Layer-Wise Relevance Propagation: An Overview
  • Explaining and Interpreting LSTMs
  • Comparing the Interpretability of Deep Networks via Network Dissection
  • Gradient-Based vs. Propagation-Based Explanations: An Axiomatic Comparison
  • The (Un)reliability of Saliency Methods
  • Visual Scene Understanding for Autonomous Driving Using Semantic Segmentation
  • Understanding Patch-Based Learning of Video Data by Explaining Predictions
  • Quantum-Chemical Insights from Interpretable Atomistic Neural Networks
  • Interpretable Deep Learning in Drug Discovery
  • Neural Hydrology: Interpreting LSTMs in Hydrology
  • Feature Fallacy: Complications with Interpreting Linear Decoding Weights in fMRI
  • Current Advances in Neural Decoding
  • Software and Application Patterns for Explanation Methods.