Numerical Methods and Optimization
Academic year 2013/2014
The course introduces some of the main techniques and methods for the solution of numerical problems. These methods often require the joint use of the typical techniques of numerical analysis and of optimization algorithms. We show some of the main situations in which optimization methods are applied to solve problems of numerical analysis, and some of the main situations in which the techniques of numerical analysis are essential to solve optimization problems. We also show the application of these methods to specific problems chosen, for instance, from the following areas: regression and parameter estimation in statistics, approximation and data fitting, machine learning, data mining, image and signal reconstruction, economic equilibria, and finance.
The program below is a preliminary version, which will be periodically updated according to the actual lectures.
COURSE PROGRAM
(A more detailed program is available through the record of lectures)
- Topology and calculus background
- Sets in the Euclidean space: openness, closedness, boundedness and compactness
- Sequences and limits in the Euclidean space
- Functions of several variables: continuity, directional derivatives and differentiability
- Functions of several variables: first and second order Taylor's formulas
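
A compact statement of the two formulas in the previous item (Peano remainder form), for f twice continuously differentiable at x \in R^n and a displacement d:

    f(x + d) = f(x) + \nabla f(x)^T d + o(\|d\|),
    f(x + d) = f(x) + \nabla f(x)^T d + \tfrac{1}{2}\, d^T \nabla^2 f(x)\, d + o(\|d\|^2).
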
- Linear algebra background
- Normal, unitary, Hermitian, positive definite, reducible matrices
- Minimal polynomial, diagonalizability, Jordan and Schur canonical forms
- Quadratic forms
- Gerschgorin theorem for irreducible matrices (see the sketch below)
- Matrix norms induced by vector norms
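
As a numerical illustration of the Gerschgorin theorem, a minimal numpy sketch that computes the discs D_i = {z : |z - a_ii| <= sum_{j != i} |a_ij|}, whose union contains every eigenvalue (for irreducible matrices the theorem is sharpened for eigenvalues on the boundary of that union). The matrix and all names below are illustrative, not course material.

    import numpy as np

    def gerschgorin_discs(A):
        """Return (center, radius) pairs; every eigenvalue of A lies in their union."""
        A = np.asarray(A)
        centers = np.diag(A)
        radii = np.abs(A).sum(axis=1) - np.abs(centers)  # off-diagonal row sums
        return list(zip(centers, radii))

    A = np.array([[4.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])
    for c, r in gerschgorin_discs(A):
        print(f"disc: center {c}, radius {r}")
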
- Convex functions, convex sets and optimization problems
- Convex and strictly convex functions, convex sets: properties and characterizations
- Classification of optimization problems; local and global minima [and maxima]
- Properties of convex optimization problems
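
The defining inequality behind these items: f is convex on a convex set C if, for all x, y \in C and all \lambda \in [0, 1],

    f(\lambda x + (1 - \lambda) y) \le \lambda f(x) + (1 - \lambda) f(y),

with strict inequality for x \ne y and \lambda \in (0, 1) giving strict convexity. A key property from the last item: every local minimum of a convex problem is also a global minimum.
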
- Optimality conditions for unconstrained optimization
- First and second order necessary optimality conditions
- Sufficient optimality conditions; the convex case
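
In compact form, for f twice continuously differentiable, the conditions in this topic read:

    \text{necessary at a local minimum } x^*: \quad \nabla f(x^*) = 0, \ \ \nabla^2 f(x^*) \succeq 0;
    \text{sufficient for a strict local minimum}: \quad \nabla f(x^*) = 0, \ \ \nabla^2 f(x^*) \succ 0.

In the convex case, stationarity \nabla f(x^*) = 0 is necessary and sufficient for a global minimum.
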
- Direct and iterative methods for linear systems
- Matrix factorizations and elementary matrices
- LU, QR and LL^H factorizations; Gauss, Householder and Cholesky methods
- Convergence of iterative methods
- Stein-Rosenberg theorem
- Special results for tridiagonal and positive definite matrices
- The conjugate gradient method for linear systems
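
A minimal numpy sketch of the conjugate gradient iteration for Ax = b with A symmetric positive definite; function and variable names are illustrative, not part of the course material.

    import numpy as np

    def conjugate_gradient(A, b, tol=1e-10, maxit=None):
        """Solve Ax = b for symmetric positive definite A."""
        n = len(b)
        maxit = maxit or n          # terminates in at most n steps in exact arithmetic
        x = np.zeros(n)
        r = b - A @ x               # residual
        p = r.copy()                # search direction
        rs = r @ r
        for _ in range(maxit):
            Ap = A @ p
            alpha = rs / (p @ Ap)   # exact linesearch along p
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs) * p  # keeps the directions A-conjugate
            rs = rs_new
        return x
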
- Iterative methods for nonlinear systems
- General notions and conditions for convergence
- Newton-Raphson method and its convergence; the case of convex functions
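
A minimal sketch of the Newton-Raphson iteration for a system F(x) = 0 with known Jacobian J; the 2x2 test system below is an illustrative assumption.

    import numpy as np

    def newton_raphson(F, J, x0, tol=1e-10, maxit=50):
        """Solve F(x) = 0; converges quadratically near a regular root."""
        x = np.asarray(x0, dtype=float)
        for _ in range(maxit):
            Fx = F(x)
            if np.linalg.norm(Fx) < tol:
                break
            x = x - np.linalg.solve(J(x), Fx)  # Newton step: J(x) d = -F(x)
        return x

    # Illustrative system: x^2 + y^2 = 1 and x = y
    F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
    J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])
    print(newton_raphson(F, J, [1.0, 0.5]))  # approaches (0.7071..., 0.7071...)
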
- Solution methods for unconstrained optimization
- Gradient methods with exact and inexact linesearch (an Armijo backtracking sketch follows this list)
- Newton's method
- Conjugate gradient methods
- Derivative-free methods: compass search techniques
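
As referenced above, a minimal sketch of steepest descent with an Armijo backtracking (inexact) linesearch; the test function, parameter values, and names are illustrative assumptions.

    import numpy as np

    def gradient_descent_armijo(f, grad, x0, sigma=1e-4, beta=0.5,
                                tol=1e-8, maxit=5000):
        """Steepest descent; step lengths chosen by Armijo backtracking."""
        x = np.asarray(x0, dtype=float)
        for _ in range(maxit):
            g = grad(x)
            if np.linalg.norm(g) < tol:
                break
            alpha = 1.0
            # shrink alpha until the sufficient-decrease (Armijo) condition holds
            while f(x - alpha * g) > f(x) - sigma * alpha * (g @ g):
                alpha *= beta
            x = x - alpha * g
        return x

    # Illustrative test problem with minimum at (1, -2)
    f = lambda v: (v[0] - 1.0)**2 + 10.0 * (v[1] + 2.0)**2
    grad = lambda v: np.array([2.0 * (v[0] - 1.0), 20.0 * (v[1] + 2.0)])
    print(gradient_descent_armijo(f, grad, [0.0, 0.0]))
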
- The least-squares problem
- The linear problem and the normal equations
- Solving the linear problem through the QR factorization and the singular value decomposition (SVD)
- Solving the linear problem through the conjugate gradient method
- The nonlinear problem and the Gauss-Newton method
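
A minimal sketch of the Gauss-Newton iteration for the nonlinear problem min ||r(x)||^2, where each step solves the normal equations of the linearized least-squares subproblem; the exponential model and the data are illustrative assumptions.

    import numpy as np

    def gauss_newton(r, J, x0, tol=1e-10, maxit=50):
        """Minimize 0.5 * ||r(x)||^2 by solving J^T J d = -J^T r at each step."""
        x = np.asarray(x0, dtype=float)
        for _ in range(maxit):
            rx, Jx = r(x), J(x)
            d = np.linalg.solve(Jx.T @ Jx, -Jx.T @ rx)  # normal equations
            x = x + d
            if np.linalg.norm(d) < tol:
                break
        return x

    # Illustrative fit of the model y ~ a * exp(b * t)
    t = np.array([0.0, 1.0, 2.0, 3.0])
    y = np.array([2.0, 1.2, 0.7, 0.4])
    r = lambda p: p[0] * np.exp(p[1] * t) - y                     # residuals
    J = lambda p: np.column_stack([np.exp(p[1] * t),              # d r / d a
                                   p[0] * t * np.exp(p[1] * t)])  # d r / d b
    print(gauss_newton(r, J, [1.0, -0.5]))
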
- Iterative methods for computing eigenvalues
- Bauer-Fike theorem
- The power method
- Unitary reduction to tridiagonal form
- Tridiagonal matrices: Sturm sequences; divide and conquer technique
- QR method for eigenvalues and its convergence
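
A minimal sketch of the unshifted QR iteration from the last item: each step is a similarity transform, and for matrices whose eigenvalues have distinct moduli the iterates tend to (block) triangular form. Practical versions first reduce to tridiagonal or Hessenberg form and use shifts, as in the items above; the test matrix and names are illustrative.

    import numpy as np

    def qr_eigenvalues(A, maxit=500, tol=1e-12):
        """Unshifted QR iteration: A_{k+1} = R_k Q_k = Q_k^T A_k Q_k."""
        Ak = np.array(A, dtype=float)
        for _ in range(maxit):
            Q, R = np.linalg.qr(Ak)
            Ak = R @ Q                                # similar to A_k
            if np.linalg.norm(np.tril(Ak, -1)) < tol:
                break                                 # (near) triangular: done
        return np.diag(Ak)

    A = np.array([[4.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])
    print(sorted(qr_eigenvalues(A)))  # approximates the eigenvalues of A
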
- Optimality conditions for constrained optimization
- Necessary optimality conditions for abstract constrained problems
- Fritz John and Karush-Kuhn-Tucker necessary optimality conditions: Lagrange multipliers
- Necessary and sufficient condition for convex optimization
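
For min f(x) subject to g_i(x) <= 0 (i = 1, ..., m) and h_j(x) = 0 (j = 1, ..., p), the Karush-Kuhn-Tucker conditions listed above read:

    \nabla f(x^*) + \sum_{i=1}^{m} \lambda_i \nabla g_i(x^*) + \sum_{j=1}^{p} \mu_j \nabla h_j(x^*) = 0,
    \lambda_i \ge 0, \quad g_i(x^*) \le 0, \quad \lambda_i\, g_i(x^*) = 0, \quad h_j(x^*) = 0.

Under a constraint qualification they are necessary for local optimality; when f and the g_i are convex and the h_j affine, they are also sufficient for global optimality, which is the convex case of the last item.
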
- Solution methods for constrained optimization
- Methods of feasible directions; the conditional gradient method (also known as the Frank-Wolfe method)
- Projection onto a convex set and projected gradient methods (sketched after this list)
- Penalty methods: exterior penalization and the augmented Lagrangian
- Barrier methods; interior point primal-dual methods for linear programming
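
As referenced above, a minimal sketch of a fixed-step projected gradient method, x <- P_C(x - alpha grad f(x)); the box constraint set and the test objective are illustrative assumptions.

    import numpy as np

    def projected_gradient(grad, project, x0, alpha=0.1, tol=1e-8, maxit=1000):
        """Gradient step followed by projection onto the feasible set C."""
        x = np.asarray(x0, dtype=float)
        for _ in range(maxit):
            x_new = project(x - alpha * grad(x))
            if np.linalg.norm(x_new - x) < tol:
                break               # fixed point of the projected-gradient map
            x = x_new
        return x

    # Illustrative problem: minimize ||x - c||^2 over the box [0, 1]^2
    c = np.array([2.0, -0.5])
    grad = lambda x: 2.0 * (x - c)
    project = lambda x: np.clip(x, 0.0, 1.0)  # projection onto the box
    print(projected_gradient(grad, project, [0.5, 0.5]))  # -> [1.0, 0.0]
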
- The fast Fourier transform
- The Fourier matrix and its properties
- The algorithm and some applications
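
A minimal recursive radix-2 Cooley-Tukey sketch of the algorithm; it assumes the input length is a power of two, and the comparison against numpy's built-in FFT is only an illustrative check.

    import numpy as np

    def fft(x):
        """Radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
        n = len(x)
        if n == 1:
            return np.asarray(x, dtype=complex)
        even = fft(x[0::2])   # DFT of even-indexed samples
        odd = fft(x[1::2])    # DFT of odd-indexed samples
        twiddle = np.exp(-2j * np.pi * np.arange(n // 2) / n)
        return np.concatenate([even + twiddle * odd,
                               even - twiddle * odd])

    x = np.random.rand(8)
    print(np.allclose(fft(x), np.fft.fft(x)))  # True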