This handbook presents state-of-the-art research in reinforcement learning, focusing on its applications in the control and game theory of dynamic systems and future directions for related research and technology.
The contributions gathered in this book deal with challenges faced when using learning and adaptation methods to solve academic and industrial problems, such as optimization in dynamic environments with single and multiple agents, convergence and performance analysis, and online implementation. They explore means by which these difficulties can be solved, and cover a wide range of related topics including:
- deep learning;
- artificial intelligence;
- applications of game theory;
- mixed modality learning; and
- multi-agent reinforcement learning.
Practicing engineers and scholars in the fields of machine learning, game theory, and autonomous control will find the
Handbook of Reinforcement Learning and Control thought-provoking, instructive, and informative.
Table of Contents
- The Cognitive Dialogue: A New Architecture for Perception and Cognition
- Rooftop-Aware Emergency Landing Planning for Small Unmanned Aircraft Systems
- Quantum Reinforcement Learning in Changing Environment
- The Role of Thermodynamics in the Future Research Directions in Control and Learning
- Mixed Density Reinforcement Learning Methods for Approximate Dynamic Programming
- Analyzing and Mitigating Link-Flooding DoS Attacks Using Stackelberg Games and Adaptive Learning
- Learning and Decision Making for Complex Systems Subjected to Uncertainties: A Stochastic Distribution Control Approach
- Optimal Adaptive Control of Partially Unknown Linear Continuous-time Systems with Input and State Delay
- Gradient Methods Solve the Linear Quadratic Regulator Problem Exponentially Fast
- Architectures, Data Representations and Learning Algorithms: New Directions at the Confluence of Control and Learning
- Reinforcement Learning for Optimal Feedback Control and Multiplayer Games
- Fundamental Principles of Design for Reinforcement Learning Algorithms
- Long-Term Impacts of Fair Machine Learning
- Learning-based Model Reduction for Partial Differential Equations with Applications to Thermo-Fluid Models’ Identification, State Estimation, and Stabilization
- CESMA: Centralized Expert Supervises Multi-Agents, for Decentralization
- A Unified Framework for Reinforcement Learning and Sequential Decision Analytics
- Trading Utility and Uncertainty: Applying the Value of Information to Resolve the Exploration-Exploitation Dilemma in Reinforcement Learning
- Multi-Agent Reinforcement Learning: Recent Advances, Challenges, and Applications
- Reinforcement Learning Applications: An Industrial Perspective
- A Hybrid Dynamical Systems Perspective of Reinforcement Learning
- Bounded Rationality and Computability Issues in Learning, Perception, Decision-Making, and Games
- Mixed Modality Learning
- Computational Intelligence in Uncertainty Quantification for Learning Control and Games
- Reinforcement Learning Based Optimal Stabilization of Unknown Time Delay Systems Using State and Output Feedback
- Robust Autonomous Driving with Humans in the Loop
- Boundedly Rational Reinforcement Learning for Secure Control
About the Authors
Kyriakos G. Vamvoudakis is an Assistant Professor at The Daniel Guggenheim School of Aerospace Engineering at Georgia Tech. He received the Diploma in Electronic and Computer Engineering from the Technical University of Crete, Greece, in 2006, and his M.S. and Ph.D. in Electrical Engineering from the University of Texas at Arlington in 2008 and 2011, respectively. From 2012 to 2016 he was a project research scientist at the Center for Control, Dynamical Systems and Computation at the University of California, Santa Barbara. He was then an assistant professor at the Kevin T. Crofton Department of Aerospace and Ocean Engineering at Virginia Tech until 2018. His research interests include reinforcement learning, control theory, and safe/assured autonomy. He is the recipient of a 2019 ARO YIP award, a 2018 NSF CAREER award, and several international awards, including the 2016 International Neural Network Society Young Investigator Award. He currently is an Associate Editor of Automatica; IEEE Computational Intelligence Magazine; IEEE Transactions on Systems, Man, and Cybernetics: Systems; Neurocomputing; Journal of Optimization Theory and Applications; and IEEE Control Systems Letters.
Yan Wan is currently an Associate Professor in the Electrical Engineering Department at the University of Texas at Arlington. She received her Ph.D. degree in Electrical Engineering from Washington State University in 2009 and then completed postdoctoral training at the University of California, Santa Barbara. Her research interests lie in the modeling, evaluation, and control of large-scale dynamical networks, cyber-physical systems, and stochastic networks. She has been recognized with several prestigious awards, including the NSF CAREER Award, the RTCA William E. Jackson Award, and U.S. Ignite and GENI demonstration awards. She currently serves as an Associate Editor for IEEE Transactions on Control of Network Systems, Transactions of the Institute of Measurement and Control, and Journal of Advanced Control for Applications.
Frank L. Lewis is a Distinguished Scholar Professor and Moncrief-O’Donnell Chair at the University of Texas at Arlington’s Automation & Robotics Research Institute. He obtained his Bachelor’s Degree in Physics/EE and his MSEE at Rice University, his M.S. in Aeronautical Engineering from the University of West Florida, and his Ph.D. from Georgia Tech. He received the Fulbright Research Award and the Outstanding Service Award from the Dallas IEEE Section, and was selected as Engineer of the Year by the Fort Worth IEEE Section. He is an elected Guest Consulting Professor at South China University of Technology and Shanghai Jiao Tong University. He is a Fellow of the IEEE, a Fellow of IFAC, a Fellow of the U.K. Institute of Measurement & Control, and a U.K. Chartered Engineer. His current research interests include distributed control on graphs, neural and fuzzy systems, and intelligent control.
Derya Cansever is a Program Manager at the US Army Research Office. Prior to that, he was the Chief Engineer of the Communication Networks and Networking Division at US Army CERDEC, where he conducted research in tactical, mission-aware, and software-defined networks. Dr. Cansever has also worked at the Johns Hopkins University Applied Physics Laboratory, AT&T Bell Labs, and GTE Laboratories. He has taught courses on data communications and network security at Boston University and the University of Massachusetts. He holds a Ph.D. in Electrical and Computer Engineering from the University of Illinois at Urbana-Champaign.