Explainable AI: Interpreting, Explaining and Visualizing Deep Learning

Bibliographic Details
Corporate Author: SpringerLink (Online service)
Other Authors: Samek, Wojciech (Editor, http://id.loc.gov/vocabulary/relators/edt), Montavon, Grégoire (Editor, http://id.loc.gov/vocabulary/relators/edt), Vedaldi, Andrea (Editor, http://id.loc.gov/vocabulary/relators/edt), Hansen, Lars Kai (Editor, http://id.loc.gov/vocabulary/relators/edt), Müller, Klaus-Robert (Editor, http://id.loc.gov/vocabulary/relators/edt)
Format: Electronic eBook
Language: English
Published: Cham : Springer International Publishing : Imprint: Springer, 2019.
Edition: 1st ed. 2019.
Series:Lecture Notes in Artificial Intelligence ; 11700
Subjects: Artificial intelligence; Optical data processing; Computers; Computer security; Computer organization
Online Access: Full Text via HEAL-Link
LEADER 05900nam a2200601 4500
001 978-3-030-28954-6
003 DE-He213
005 20191114021317.0
007 cr nn 008mamaa
008 190829s2019 gw | s |||| 0|eng d
020 |a 9783030289546  |9 978-3-030-28954-6 
024 7 |a 10.1007/978-3-030-28954-6  |2 doi 
040 |d GrThAP 
050 4 |a Q334-342 
072 7 |a UYQ  |2 bicssc 
072 7 |a COM004000  |2 bisacsh 
072 7 |a UYQ  |2 thema 
082 0 4 |a 006.3  |2 23 
245 1 0 |a Explainable AI: Interpreting, Explaining and Visualizing Deep Learning  |h [electronic resource] /  |c edited by Wojciech Samek, Grégoire Montavon, Andrea Vedaldi, Lars Kai Hansen, Klaus-Robert Müller. 
250 |a 1st ed. 2019. 
264 1 |a Cham :  |b Springer International Publishing :  |b Imprint: Springer,  |c 2019. 
300 |a XI, 439 p. 152 illus., 119 illus. in color.  |b online resource. 
336 |a text  |b txt  |2 rdacontent 
337 |a computer  |b c  |2 rdamedia 
338 |a online resource  |b cr  |2 rdacarrier 
347 |a text file  |b PDF  |2 rda 
490 1 |a Lecture Notes in Artificial Intelligence ;  |v 11700 
505 0 |a Towards Explainable Artificial Intelligence -- Transparency: Motivations and Challenges -- Interpretability in Intelligent Systems: A New Concept? -- Understanding Neural Networks via Feature Visualization: A Survey -- Interpretable Text-to-Image Synthesis with Hierarchical Semantic Layout Generation -- Unsupervised Discrete Representation Learning -- Towards Reverse-Engineering Black-Box Neural Networks -- Explanations for Attributing Deep Neural Network Predictions -- Gradient-Based Attribution Methods -- Layer-Wise Relevance Propagation: An Overview -- Explaining and Interpreting LSTMs -- Comparing the Interpretability of Deep Networks via Network Dissection -- Gradient-Based vs. Propagation-Based Explanations: An Axiomatic Comparison -- The (Un)reliability of Saliency Methods -- Visual Scene Understanding for Autonomous Driving Using Semantic Segmentation -- Understanding Patch-Based Learning of Video Data by Explaining Predictions -- Quantum-Chemical Insights from Interpretable Atomistic Neural Networks -- Interpretable Deep Learning in Drug Discovery -- Neural Hydrology: Interpreting LSTMs in Hydrology -- Feature Fallacy: Complications with Interpreting Linear Decoding Weights in fMRI -- Current Advances in Neural Decoding -- Software and Application Patterns for Explanation Methods. 
520 |a The development of "intelligent" systems that can make decisions and act autonomously might lead to faster and more consistent decisions. A limiting factor for broader adoption of AI technology is the inherent risk that comes with giving up human control and oversight to "intelligent" machines. For sensitive tasks involving critical infrastructures and affecting human well-being or health, it is crucial to limit the possibility of improper, non-robust and unsafe decisions and actions. Before deploying an AI system, we see a strong need to validate its behavior, and thus establish guarantees that it will continue to perform as expected when deployed in a real-world environment. In pursuit of that objective, ways for humans to verify the agreement between the AI decision structure and their own ground-truth knowledge have been explored. Explainable AI (XAI) has developed as a subfield of AI focused on exposing complex AI models to humans in a systematic and interpretable manner. The 22 chapters included in this book provide a timely snapshot of algorithms, theory, and applications of interpretable and explainable AI techniques that have been proposed recently, reflecting the current discourse in this field and providing directions for future development. The book is organized into six parts: towards AI transparency; methods for interpreting AI systems; explaining the decisions of AI systems; evaluating interpretability and explanations; applications of explainable AI; and software for explainable AI. 
650 0 |a Artificial intelligence. 
650 0 |a Optical data processing. 
650 0 |a Computers. 
650 0 |a Computer security. 
650 0 |a Computer organization. 
650 1 4 |a Artificial Intelligence.  |0 http://scigraph.springernature.com/things/product-market-codes/I21000 
650 2 4 |a Image Processing and Computer Vision.  |0 http://scigraph.springernature.com/things/product-market-codes/I22021 
650 2 4 |a Computing Milieux.  |0 http://scigraph.springernature.com/things/product-market-codes/I24008 
650 2 4 |a Systems and Data Security.  |0 http://scigraph.springernature.com/things/product-market-codes/I28060 
650 2 4 |a Computer Systems Organization and Communication Networks.  |0 http://scigraph.springernature.com/things/product-market-codes/I13006 
700 1 |a Samek, Wojciech.  |e editor.  |0 (orcid)0000-0002-6283-3265  |1 https://orcid.org/0000-0002-6283-3265  |4 edt  |4 http://id.loc.gov/vocabulary/relators/edt 
700 1 |a Montavon, Grégoire.  |e editor.  |4 edt  |4 http://id.loc.gov/vocabulary/relators/edt 
700 1 |a Vedaldi, Andrea.  |e editor.  |0 (orcid)0000-0003-1374-2858  |1 https://orcid.org/0000-0003-1374-2858  |4 edt  |4 http://id.loc.gov/vocabulary/relators/edt 
700 1 |a Hansen, Lars Kai.  |e editor.  |0 (orcid)0000-0003-0442-5877  |1 https://orcid.org/0000-0003-0442-5877  |4 edt  |4 http://id.loc.gov/vocabulary/relators/edt 
700 1 |a Müller, Klaus-Robert.  |e editor.  |0 (orcid)0000-0002-3861-7685  |1 https://orcid.org/0000-0002-3861-7685  |4 edt  |4 http://id.loc.gov/vocabulary/relators/edt 
710 2 |a SpringerLink (Online service) 
773 0 |t Springer eBooks 
776 0 8 |i Printed edition:  |z 9783030289539 
776 0 8 |i Printed edition:  |z 9783030289553 
830 0 |a Lecture Notes in Artificial Intelligence ;  |v 11700 
856 4 0 |u https://doi.org/10.1007/978-3-030-28954-6  |z Full Text via HEAL-Link 
912 |a ZDB-2-SCS 
912 |a ZDB-2-LNC 
950 |a Computer Science (Springer-11645)