DYNAMIC SYSTEMS AND SIMULATION
LABORATORY
Department of Production Engineering & Management
Technical University of Crete
by
MARKOS PAPAGEORGIOU and MAGDALENE MARINAKI
INTERNAL REPORT No: 1995-4
Chania, Greece
November 1995
CONTENTS
1. INTRODUCTION
2. THE THEORY OF DISCRETE-TIME OPTIMAL CONTROL
2.1. Problem Formulation
2.2. Optimality Conditions
2.3. Extensions
3. FEASIBLE DIRECTION ALGORITHM
3.1. Reduced Gradient
3.2. Basic Algorithmic Structure
3.3. Specification of a Search Direction
3.3.1. Steepest Descent
3.3.2. Quasi-Newton Methods
3.3.3. Conjugate Gradient Methods
3.4. Line Optimization
3.4.1. The One-Dimensional Line Function
3.4.2. Numerical Line Search Algorithm
3.5. Convergence Test
3.6. Restart
3.7. Scaling
3.8. Examples
4. EXTENSIONS
4.1. Constant Control Bounds
4.2. State-Dependent Control Bounds
4.3. Linear Control Constraints
4.3.1. General Background
4.3.2. Calculation of Feasible Control Variables
4.3.3. Algorithmic Modifications
4.4. State-Dependent Linear Control Constraints
4.5. Further Extensions
5. CONCLUSIONS
REFERENCES
APPENDIX A: OPTIMALITY OF EXTENDED ALGORITHM
A.1. Constant Control Bounds
A.2. Linear Control Constraints