spelling |
oapen-20.500.12657-26040
2021-11-10T07:56:22Z
Elements of Causal Inference
Peters, Jonas; Janzing, Dominik; Schölkopf, Bernhard
Keywords: causality; machine learning; statistical models; probability theory; statistics; assumptions; cause-effect models; interventions; counterfactuals; SCMs; identifiability; semi-supervised learning; covariate shift; multivariate causal models; Markov; faithfulness; causal minimality; do-calculus; falsifiability; potential outcomes; algorithmic independence; half-sibling regression; episodic reinforcement learning; domain adaptation; Simpson's paradox; conditional independence; computer science
bic Book Industry Communication::U Computing & information technology::UM Computer programming / software development::UMS Mobile & handheld device programming / Apps programming
bic Book Industry Communication::U Computing & information technology::UY Computer science::UYQ Artificial intelligence::UYQM Machine learning
bic Book Industry Communication::U Computing & information technology::UY Computer science::UYQ Artificial intelligence::UYQN Neural networks & fuzzy systems
2019-01-20 23:42:51
2020-04-01T10:57:59Z
2020-04-01T10:57:59Z
2017
book
1004045
OCN: 1100492112
9780262037310
http://library.oapen.org/handle/20.500.12657/26040
eng
Adaptive Computation and Machine Learning series
application/pdf
n/a
11283.pdf
The MIT Press
f49dea23-efb1-407d-8ac0-6ed2b5cb4b74
9780262037310
288
Cambridge
open access
|
description |
A concise and self-contained introduction to causal inference, increasingly important in data science and machine learning. The mathematization of causality is a relatively recent development, and this book offers a self-contained and concise introduction to causal models and how to learn them from data. After explaining the need for causal models and discussing some of the principles underlying causal inference, the book teaches readers how to use them: how to compute intervention distributions, how to infer causal models from observational and interventional data, and how causal ideas can be exploited for classical machine learning problems. All of these topics are discussed first in terms of two variables and then in the more general multivariate case. The bivariate case turns out to be particularly hard for causal learning because it offers no conditional independencies of the kind that classical methods exploit in the multivariate setting. The authors consider analyzing statistical asymmetries between cause and effect to be highly instructive, and they report on their decade of intensive research into this problem. The book is accessible to readers with a background in machine learning or statistics, and can be used in graduate courses or as a reference for researchers. The text includes code snippets that can be copied and pasted, exercises, and an appendix with a summary of the most important technical concepts.
|
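The distinction the description draws between observational and interventional reasoning can be sketched with a toy structural causal model. The model below (X := N_X, Y := 2X + N_Y, and the do(Y := 2) intervention) is a hypothetical illustration, not code from the book: conditioning on the effect Y is informative about the cause X, while intervening on Y leaves the distribution of X untouched.

```python
import random

random.seed(0)

def sample_scm(n, do_y=None):
    """Draw n samples from the SCM X := N_X, Y := 2*X + N_Y
    with independent standard-normal noise terms.
    If do_y is given, Y's assignment is replaced by the constant do_y
    (an intervention); the mechanism for X is unchanged."""
    pairs = []
    for _ in range(n):
        x = random.gauss(0, 1)
        y = do_y if do_y is not None else 2 * x + random.gauss(0, 1)
        pairs.append((x, y))
    return pairs

n = 100_000
obs = sample_scm(n)

# Conditioning on the effect: for this linear-Gaussian model,
# E[X | Y = 2] = cov(X,Y)/var(Y) * 2 = (2/5) * 2 = 0.8.
x_given_y = [x for x, y in obs if abs(y - 2.0) < 0.2]
cond_mean = sum(x_given_y) / len(x_given_y)
print(f"E[X | Y ~ 2]  ~ {cond_mean:.2f}")   # close to 0.8

# Intervening on the effect: under do(Y := 2), X keeps its marginal
# distribution, so its mean stays near 0.
intervened = sample_scm(n, do_y=2.0)
int_mean = sum(x for x, _ in intervened) / n
print(f"E[X | do(Y=2)] ~ {int_mean:.2f}")   # close to 0.0
```

The asymmetry is the point: an intervention on Y severs Y's dependence on its cause, so downstream-to-upstream inference vanishes, whereas conditioning does not distinguish cause from effect.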