A comprehensive exploration of control schemes for human-robot interaction
In Human-Robot Interaction Control Using Reinforcement Learning, an expert team of authors delivers a concise overview of human-robot interaction control schemes and insightful presentations of novel model-free and reinforcement learning controllers. The book begins with a brief introduction to state-of-the-art human-robot interaction control and reinforcement learning before describing the typical environment model. The authors also cover some of the most widely used identification techniques for parameter estimation.
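To give a flavor of the environment models the book treats, a common impedance model (shown here in one standard textbook form, not quoted from the book itself) relates the interaction force to the robot's motion through an inertia-damping-stiffness relation:

```latex
M_e \,\ddot{x} + B_e \,\dot{x} + K_e \,(x - x_e) = f_e
```

Here $M_e$, $B_e$, and $K_e$ denote the environment's inertia, damping, and stiffness, $x_e$ its rest position, and $f_e$ the contact force; identification techniques estimate these parameters from measured motion and force data.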
Human-Robot Interaction Control Using Reinforcement Learning offers rigorous mathematical treatments and demonstrations that facilitate the understanding of control schemes and algorithms. It also presents stability and convergence analyses of human-robot interaction control and of reinforcement-learning-based control.
The authors also discuss advanced and cutting-edge topics, such as inverse and velocity kinematics solutions, H2 neural control, and anticipated developments in the field of robotics.
Readers will also enjoy:
* A thorough introduction to model-based human-robot interaction control
* Comprehensive explorations of model-free human-robot interaction control and human-in-the-loop control using Euler angles
* Practical discussions of reinforcement learning for robot position and force control, as well as continuous time reinforcement learning for robot force control
* In-depth examinations of robot control in worst-case uncertainty using reinforcement learning and the control of redundant robots using multi-agent reinforcement learning
Perfect for senior undergraduate and graduate students, academic researchers, and industrial practitioners in the fields of robotics, learning control systems, neural networks, and computational intelligence, Human-Robot Interaction Control Using Reinforcement Learning is also an indispensable resource for students and professionals studying reinforcement learning.
Table of Contents
Author Biographies xi
List of Figures xiii
List of Tables xvii
Preface xix
Part I Human-robot Interaction Control 1
1 Introduction 3
1.1 Human-Robot Interaction Control 3
1.2 Reinforcement Learning for Control 6
1.3 Structure of the Book 7
References 10
2 Environment Model of Human-Robot Interaction 17
2.1 Impedance and Admittance 17
2.2 Impedance Model for Human-Robot Interaction 21
2.3 Identification of Human-Robot Interaction Model 24
2.4 Conclusions 30
References 30
3 Model Based Human-Robot Interaction Control 33
3.1 Task Space Impedance/Admittance Control 33
3.2 Joint Space Impedance Control 36
3.3 Accuracy and Robustness 37
3.4 Simulations 39
3.5 Conclusions 42
References 44
4 Model Free Human-Robot Interaction Control 45
4.1 Task-Space Control Using Joint-Space Dynamics 45
4.2 Task-Space Control Using Task-Space Dynamics 52
4.3 Joint Space Control 53
4.4 Simulations 54
4.5 Experiments 55
4.6 Conclusions 68
References 71
5 Human-in-the-loop Control Using Euler Angles 73
5.1 Introduction 73
5.2 Joint-Space Control 74
5.3 Task-Space Control 79
5.4 Experiments 83
5.5 Conclusions 92
References 94
Part II Reinforcement Learning for Robot Interaction Control 97
6 Reinforcement Learning for Robot Position/Force Control 99
6.1 Introduction 99
6.2 Position/Force Control Using an Impedance Model 100
6.3 Reinforcement Learning Based Position/Force Control 103
6.4 Simulations and Experiments 110
6.5 Conclusions 117
References 117
7 Continuous-Time Reinforcement Learning for Force Control 119
7.1 Introduction 119
7.2 K-means Clustering for Reinforcement Learning 120
7.3 Position/Force Control Using Reinforcement Learning 124
7.4 Experiments 130
7.5 Conclusions 136
References 136
8 Robot Control in Worst-Case Uncertainty Using Reinforcement Learning 139
8.1 Introduction 139
8.2 Robust Control Using Discrete-Time Reinforcement Learning 141
8.3 Double Q-Learning with k-Nearest Neighbors 144
8.4 Robust Control Using Continuous-Time Reinforcement Learning 150
8.5 Simulations and Experiments: Discrete-Time Case 154
8.6 Simulations and Experiments: Continuous-Time Case 161
8.7 Conclusions 170
References 170
9 Redundant Robots Control Using Multi-Agent Reinforcement Learning 173
9.1 Introduction 173
9.2 Redundant Robot Control 175
9.3 Multi-Agent Reinforcement Learning for Redundant Robot Control 179
9.4 Simulations and Experiments 183
9.5 Conclusions 187
References 189
10 Robot H2 Neural Control Using Reinforcement Learning 193
10.1 Introduction 193
10.2 H2 Neural Control Using Discrete-Time Reinforcement Learning 194
10.3 H2 Neural Control in Continuous Time 207
10.4 Examples 219
10.5 Conclusion 229
References 229
11 Conclusions 233
A Robot Kinematics and Dynamics 235
A.1 Kinematics 235
A.2 Dynamics 237
A.3 Examples 240
References 246
B Reinforcement Learning for Control 247
B.1 Markov Decision Processes 247
B.2 Value Functions 248
B.3 Iterations 250
B.4 TD Learning 251
Reference 258
Index 259
About the Authors
WEN YU, PhD, is Professor and Head of the Departamento de Control Automático at the Centro de Investigación y de Estudios Avanzados, Instituto Politécnico Nacional (CINVESTAV-IPN), Mexico City, Mexico. He is a co-author of Modeling and Control of Uncertain Nonlinear Systems with Fuzzy Equations and Z-Number.
ADOLFO PERRUSQUÍA, PhD, is a Research Fellow in the School of Aerospace, Transport and Manufacturing at Cranfield University in Bedford, UK.