Approximate dynamic programming solves decision and control problems

While advances in science and engineering have enabled us to design and build complex systems, how to control and optimize them remains a challenge. This was made clear, for example, by the major power outage across dozens of cities in the Eastern United States and Canada in August 2003. Learning and approximate dynamic programming (ADP) is emerging as one of the most promising mathematical and computational approaches for solving nonlinear, large-scale, dynamic control problems under uncertainty. It draws heavily on both rigorous mathematics and biological inspiration and parallels, and helps unify new developments across many disciplines. The foundations of learning and approximate dynamic programming have evolved from several fields: optimal control, artificial intelligence (reinforcement learning), operations research (dynamic programming), and stochastic approximation methods (neural networks). Applications of these methods span engineering, economics, business, and computer science.

In this volume, leading experts in the field summarize the latest research in areas including:

- Reinforcement learning and its relationship to supervised learning
- Model-based adaptive critic designs
- Direct neural dynamic programming
- Hierarchical decision-making
- Multistage stochastic linear programming for resource allocation problems
- Concurrency, multiagency, and partial observability
- Backpropagation through time and derivative adaptive critics
- Applications of approximate dynamic programming and reinforcement learning in control-constrained agile missiles; power systems; heating, ventilation, and air conditioning; helicopter flight control; transportation; and more

Jennie Si is the author of 'Handbook of Learning and Approximate Dynamic Programming', published in 2004 under ISBN 9780471660545 and ISBN 047166054X.