This monograph explores the analysis and design of model-free optimal control systems based on reinforcement learning (RL) theory, presenting new methods that overcome current challenges faced by RL. It demonstrates new developments in the design of data-efficient RL algorithms that not only reduce sensing requirements through output feedback but also provide optimality and stability guarantees. A variety of practical challenges are considered, including disturbance rejection, control constraints, and communication delays. Ideas from game theory are incorporated to solve output feedback disturbance rejection problems, and concepts from low gain feedback control are employed to develop RL controllers that achieve global stability under control constraints.
Output Feedback Reinforcement Learning Control for Linear Systems will be a valuable reference for graduate students, control theorists working on optimal control systems, engineers, and applied mathematicians.
Table of Contents
Preface
Introduction to Optimal Control and Reinforcement Learning
Model-Free Design of Linear Quadratic Regulator
Model-Free H-infinity Disturbance Rejection and Linear Quadratic Zero-Sum Games
Model-Free Stabilization in the Presence of Actuator Saturation
Model-Free Control of Time Delay Systems
Model-Free Optimal Tracking Control and Multi-Agent Synchronization
Index