Reinforcement learning and approximate dynamic programming for feedback control
"Reinforcement learning (RL) and adaptive dynamic programming (ADP) has been one of the most critical research fields in science and engineering for modern complex systems. This book describes the latest RL and ADP techniques for decision and control in human engineered systems, covering both s...
Format: eBook
Language: English
Published: Hoboken, New Jersey : IEEE Press, [2012]
Series: IEEE series on computational intelligence
Available Online: Full Text via HEAL-Link
Table of Contents:
- Series Page; Title Page; Copyright; Preface; Contributors; Part I: Feedback Control Using RL And ADP; Chapter 1: Reinforcement Learning and Approximate Dynamic Programming (RLADP)-Foundations, Common Misconceptions, and the Challenges Ahead; 1.1 Introduction; 1.2 What is RLADP?; 1.3 Some Basic Challenges in Implementing ADP; Disclaimer; References; Chapter 2: Stable Adaptive Neural Control of Partially Observable Dynamic Systems; 2.1 Introduction; 2.2 Background; 2.3 Stability Bias; 2.4 Example Application; References
- Chapter 3: Optimal Control of Unknown Nonlinear Discrete-Time Systems Using the Iterative Globalized Dual Heuristic Programming Algorithm; 3.1 Background Material; 3.2 Neuro-Optimal Control Scheme Based on the Iterative ADP Algorithm; 3.3 Generalization; 3.4 Simulation Studies; 3.5 Summary; References; Chapter 4: Learning and Optimization in Hierarchical Adaptive Critic Design; 4.1 Introduction; 4.2 Hierarchical ADP Architecture with Multiple-Goal Representation; 4.3 Case Study: The Ball-and-Beam System; 4.4 Conclusions and Future Work; Acknowledgments; References
- Chapter 5: Single Network Adaptive Critics Networks-Development, Analysis, and Applications; 5.1 Introduction; 5.2 Approximate Dynamic Programming; 5.3 SNAC; 5.4 J-SNAC; 5.5 Finite-SNAC; 5.6 Conclusions; Acknowledgments; References; Chapter 6: Linearly Solvable Optimal Control; 6.1 Introduction; 6.2 Linearly Solvable Optimal Control Problems; 6.3 Extension to Risk-Sensitive Control and Game Theory; 6.4 Properties and Algorithms; 6.5 Conclusions and Future Work; References; Chapter 7: Approximating Optimal Control with Value Gradient Learning; 7.1 Introduction
- 7.2 Value Gradient Learning and BPTT Algorithms; 7.3 A Convergence Proof for VGL(1) for Control with Function Approximation; 7.4 Vertical Lander Experiment; 7.5 Conclusions; References; Chapter 8: A Constrained Backpropagation Approach to Function Approximation and Approximate Dynamic Programming; 8.1 Background; 8.2 Constrained Backpropagation (CPROP) Approach; 8.3 Solution of Partial Differential Equations in Nonstationary Environments; 8.4 Preserving Prior Knowledge in Exploratory Adaptive Critic Designs; 8.5 Summary; Algebraic ANN Control Matrices; References
- Chapter 9: Toward Design of Nonlinear ADP Learning Controllers with Performance Assurance; 9.1 Introduction; 9.2 Direct Heuristic Dynamic Programming; 9.3 A Control Theoretic View on the Direct HDP; 9.4 Direct HDP Design with Improved Performance Case 1-Design Guided by a Priori LQR Information; 9.5 Direct HDP Design with Improved Performance Case 2-Direct HDP for Coordinated Damping Control of Low-Frequency Oscillation; 9.6 Summary; Acknowledgment; References; Chapter 10: Reinforcement Learning Control with Time-Dependent Agent Dynamics; 10.1 Introduction; 10.2 Q-Learning