LEADER 03418nam a22004935i 4500
001    978-3-319-01168-4
003    DE-He213
005    20151103123158.0
007    cr nn 008mamaa
008    130623s2013 gw | s |||| 0|eng d
020    |a 9783319011684 |9 978-3-319-01168-4
024 7  |a 10.1007/978-3-319-01168-4 |2 doi
040    |d GrThAP
050  4 |a Q342
072  7 |a UYQ |2 bicssc
072  7 |a COM004000 |2 bisacsh
082 04 |a 006.3 |2 23
100 1  |a Hester, Todd. |e author.
245 10 |a TEXPLORE: Temporal Difference Reinforcement Learning for Robots and Time-Constrained Domains |h [electronic resource] / |c by Todd Hester.
264  1 |a Heidelberg : |b Springer International Publishing : |b Imprint: Springer, |c 2013.
300    |a XIV, 165 p. 55 illus. in color. |b online resource.
336    |a text |b txt |2 rdacontent
337    |a computer |b c |2 rdamedia
338    |a online resource |b cr |2 rdacarrier
347    |a text file |b PDF |2 rda
490 1  |a Studies in Computational Intelligence, |x 1860-949X ; |v 503
505 0  |a Introduction -- Background and Problem Specification -- Real Time Architecture -- The TEXPLORE Algorithm -- Empirical Evaluation -- Further Examination of Exploration -- Related Work -- Discussion and Conclusion -- TEXPLORE Pseudo-Code.
520    |a This book presents and develops new reinforcement learning methods that enable fast and robust learning on robots in real-time. Robots have the potential to solve many problems in society, because of their ability to work in dangerous places doing necessary jobs that no one wants or is able to do. One barrier to their widespread deployment is that they are mainly limited to tasks where it is possible to hand-program behaviors for every situation that may be encountered. For robots to meet their potential, they need methods that enable them to learn and adapt to novel situations that they were not programmed for. Reinforcement learning (RL) is a paradigm for learning sequential decision making processes and could solve the problems of learning and adaptation on robots. This book identifies four key challenges that must be addressed for an RL algorithm to be practical for robotic control tasks. These RL for Robotics Challenges are: 1) it must learn in very few samples; 2) it must learn in domains with continuous state features; 3) it must handle sensor and/or actuator delays; and 4) it should continually select actions in real time. This book focuses on addressing all four of these challenges. In particular, this book is focused on time-constrained domains where the first challenge is critically important. In these domains, the agent’s lifetime is not long enough for it to explore the domains thoroughly, and it must learn in very few samples.
650  0 |a Engineering.
650  0 |a Image processing.
650  0 |a Computational intelligence.
650  0 |a Robotics.
650  0 |a Automation.
650 14 |a Engineering.
650 24 |a Computational Intelligence.
650 24 |a Image Processing and Computer Vision.
650 24 |a Robotics and Automation.
710 2  |a SpringerLink (Online service)
773 0  |t Springer eBooks
776 08 |i Printed edition: |z 9783319011677
830  0 |a Studies in Computational Intelligence, |x 1860-949X ; |v 503
856 40 |u http://dx.doi.org/10.1007/978-3-319-01168-4 |z Full Text via HEAL-Link
912    |a ZDB-2-ENG
950    |a Engineering (Springer-11647)