This updated edition reflects major changes that have occurred in the field in recent years and presents, in a clear and direct way, the fundamentals of optimal control theory. It covers the major topics of measurement, principles of optimality, dynamic programming, variational methods, Kalman filtering, and other solution techniques.
1. Static Optimization.
1.1 Optimization without Constraints.
1.2 Optimization with Equality Constraints.
1.3 Numerical Solution Methods.
2. Optimal Control of Discrete-Time Systems.
2.1 Solution of the General Discrete Optimization Problem.
2.2 Discrete-Time Linear Quadratic Regulator.
2.3 Digital Control of Continuous-Time Systems.
2.4 Steady-State Closed-Loop Control and Suboptimal Feedback.
2.5 Frequency-Domain Results.
3. Optimal Control of Continuous-Time Systems.
3.1 The Calculus of Variations.
3.2 Solution of the General Continuous Optimization Problem.
3.3 Continuous-Time Linear Quadratic Regulator.
3.4 Steady-State Closed-Loop Control and Suboptimal Feedback.
3.5 Frequency-Domain Results.
4. The Tracking Problem and Other LQR Extensions.
4.1 The Tracking Problem.
4.2 Regulator with Function of Final State Fixed.
4.3 Second-Order Variations in the Performance Index.
4.4 The Discrete-Time Tracking Problem.
4.5 Discrete Regulator with Function of Final State Fixed.
4.6 Discrete Second-Order Variations in the Performance Index.
5. Final-Time-Free and Constrained Input Control.
5.1 Final-Time-Free Problems.
5.2 Constrained Input Problems.
6. Dynamic Programming.
6.1 Bellman's Principle of Optimality.
6.2 Discrete-Time Systems.
6.3 Continuous-Time Systems.
7. Optimal Control for Polynomial Systems.
7.1 Discrete Linear Quadratic Regulator.
7.2 Digital Control for Continuous-Time Systems.
8. Output Feedback and Structured Control.
8.1 Linear Quadratic Regulator with Output Feedback.
8.2 Tracking a Reference Input.
8.3 Tracking by Regulator Redesign.
8.4 Command-Generator Tracker.
8.5 Explicit Model-Following Design.
8.6 Output Feedback in Game Theory and Decentralized Control.
9. Robustness and Multivariable Frequency-Domain Techniques.
9.2 Multivariable Frequency-Domain Analysis.
9.3 Robust Output-Feedback Design.
9.4 Observers and the Kalman Filter.
9.5 LQG/Loop-Transfer Recovery.
9.6 H∞ Design.
10. Differential Games.
10.1 Optimal Control Derived Using Pontryagin's Minimum Principle and Bellman's Equation.
10.2 Two-Player Zero-Sum Games.
10.3 Application of Zero-Sum Games to H∞ Control.
10.4 Multi-Player Non-Zero-Sum Games.
11. Reinforcement Learning and Optimal Adaptive Control.
11.1 Reinforcement Learning.
11.2 Markov Decision Processes.
11.3 Policy Evaluation and Policy Improvement.
11.4 Temporal Difference Learning and Optimal Adaptive Control.
11.5 Optimal Adaptive Control for Discrete-Time Systems.
11.6 Integral Reinforcement Learning for Optimal Adaptive Control of Continuous-Time Systems.
11.7 Synchronous Optimal Adaptive Control for Continuous-Time Systems.
Appendix A. Review of Matrix Algebra.