Explainable AI: Interpreting, Explaining and Visualizing Deep Learning

The development of "intelligent" systems that can take decisions and perform autonomously might lead to faster and more consistent decisions. A limiting factor for broader adoption of AI technology is the inherent risk that comes with giving up human control and oversight to "intelligent" machines.

Bibliographic record details
Corporate author: SpringerLink (Online service)
Other authors: Samek, Wojciech (Editor, http://id.loc.gov/vocabulary/relators/edt), Montavon, Grégoire (Editor, http://id.loc.gov/vocabulary/relators/edt), Vedaldi, Andrea (Editor, http://id.loc.gov/vocabulary/relators/edt), Hansen, Lars Kai (Editor, http://id.loc.gov/vocabulary/relators/edt), Müller, Klaus-Robert (Editor, http://id.loc.gov/vocabulary/relators/edt)
Format: Electronic resource; eBook
Language: English
Published: Cham : Springer International Publishing : Imprint: Springer, 2019.
Edition: 1st ed. 2019.
Series: Lecture Notes in Artificial Intelligence ; 11700
Available online: Full Text via HEAL-Link
Table of contents:
  • Towards Explainable Artificial Intelligence
  • Transparency: Motivations and Challenges
  • Interpretability in Intelligent Systems: A New Concept?
  • Understanding Neural Networks via Feature Visualization: A Survey
  • Interpretable Text-to-Image Synthesis with Hierarchical Semantic Layout Generation
  • Unsupervised Discrete Representation Learning
  • Towards Reverse-Engineering Black-Box Neural Networks
  • Explanations for Attributing Deep Neural Network Predictions
  • Gradient-Based Attribution Methods
  • Layer-Wise Relevance Propagation: An Overview
  • Explaining and Interpreting LSTMs
  • Comparing the Interpretability of Deep Networks via Network Dissection
  • Gradient-Based vs. Propagation-Based Explanations: An Axiomatic Comparison
  • The (Un)reliability of Saliency Methods
  • Visual Scene Understanding for Autonomous Driving Using Semantic Segmentation
  • Understanding Patch-Based Learning of Video Data by Explaining Predictions
  • Quantum-Chemical Insights from Interpretable Atomistic Neural Networks
  • Interpretable Deep Learning in Drug Discovery
  • NeuralHydrology: Interpreting LSTMs in Hydrology
  • Feature Fallacy: Complications with Interpreting Linear Decoding Weights in fMRI
  • Current Advances in Neural Decoding
  • Software and Application Patterns for Explanation Methods.