Project Description

Learning objectives

The aim of the course is to introduce constrained and unconstrained optimization, in particular its applications to the training of neural networks, Support Vector Machines (SVMs) and the definition of clustering techniques. For this purpose, two software packages will be introduced: AMPL and WEKA. AMPL is a modeling language for describing and solving large-scale mathematical programming problems. WEKA is a collection of machine learning algorithms for data analysis, used to solve classification and regression problems.

Course content

9 CFU-program:
Introduction to optimization: mathematical modeling approach. Classification of mathematical programming problems. Conditions for the existence of solutions (Chapter 1 of the textbook, except Sections 1.5.3, 1.5.5 and 1.5.6).
Unconstrained optimization: optimality conditions and solution algorithms: global convergence conditions, convergent methods based on one-dimensional searches, line-search techniques. Gradient method. Newton method. Decomposition methods: introduction, sequential and parallel algorithms. Gauss-Seidel method, Gauss-Southwell method, block descent method, brief introduction to decomposition with overlapping blocks. Jacobi method.
An application: training of neural networks via unconstrained optimization. Proof of convergence of the perceptron algorithm.
Constrained optimization: optimality conditions and solution algorithms. Analytical optimality conditions: Fritz John conditions, constraint qualifications (linear independence of the gradients of the equality and active constraints only). Wolfe duality: general case and the quadratic case. Linear and non-linear SVMs. Optimality conditions and training of SVMs: SVMlight.
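As a small illustration of the perceptron topic listed above, the following Python sketch implements the classical perceptron update rule (illustrative only, not course material; the function name and data layout are assumptions):

```python
import numpy as np

def perceptron(X, y, max_epochs=100):
    """Perceptron algorithm for linearly separable data.

    X: (n, d) array of samples; y: labels in {-1, +1}.
    Returns a weight vector w with the bias folded in as the last component.
    """
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # append a constant bias feature
    w = np.zeros(Xb.shape[1])
    for _ in range(max_epochs):
        updated = False
        for xi, yi in zip(Xb, y):
            if yi * (w @ xi) <= 0:   # misclassified (or on the boundary)
                w += yi * xi         # perceptron correction step
                updated = True
        if not updated:              # a full pass with no mistakes: converged
            break
    return w
```

On linearly separable data the convergence proof covered in the course guarantees that the loop terminates after a finite number of corrections.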

6 CFU-program:
Introduction to optimization: mathematical modeling approach. Classification of mathematical programming problems. Existence of solutions.
Unconstrained optimization: optimality conditions, solution algorithms: global convergence conditions. Line search. Gradient method, Newton method. Decomposition methods: Gauss-Seidel method, Gauss-Southwell method, block descent method, Jacobi method.
An application: perceptron, multilayer neural networks, RBF networks. Training of neural networks via unconstrained optimization.
Constrained optimization: optimality conditions and solution algorithms.
Wolfe duality: the special case of quadratic optimization. Linear and non-linear Support Vector Machines (SVMs). Optimality conditions and training of SVMs: SVMlight.
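The gradient method with a globally convergent line search, listed in both programs, can be sketched as follows (a minimal sketch with an Armijo backtracking rule; the function names, constants and test function are illustrative assumptions, not course material):

```python
import numpy as np

def gradient_method(f, grad, x0, tol=1e-6, max_iter=1000):
    """Steepest-descent method with Armijo backtracking line search."""
    x = np.asarray(x0, dtype=float)
    alpha0, gamma, delta = 1.0, 1e-4, 0.5  # initial step, sufficient-decrease and backtracking factors
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:        # approximate stationarity: stop
            break
        d = -g                             # steepest-descent direction
        alpha = alpha0
        # Armijo condition: f(x + a d) <= f(x) + gamma * a * grad(x)^T d
        while f(x + alpha * d) > f(x) + gamma * alpha * (g @ d):
            alpha *= delta                 # halve the step until sufficient decrease holds
        x = x + alpha * d
    return x
```

For example, on the convex quadratic f(x) = x1² + 4·x2² the iterates converge to the unique minimizer at the origin.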
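The connection between Wolfe duality and SVM training named above can be illustrated with a small sketch: a hard-margin linear SVM trained by solving its dual quadratic program with a generic solver (names, the SLSQP choice and the toy data are assumptions; SVMlight uses a specialized decomposition method instead):

```python
import numpy as np
from scipy.optimize import minimize

def train_linear_svm(X, y):
    """Hard-margin linear SVM trained through its Wolfe dual.

    Maximizes sum_i a_i - 0.5 * sum_ij a_i a_j y_i y_j <x_i, x_j>
    subject to a_i >= 0 and sum_i a_i y_i = 0.
    """
    n = X.shape[0]
    G = (y[:, None] * X) @ (y[:, None] * X).T            # G_ij = y_i y_j <x_i, x_j>
    res = minimize(lambda a: 0.5 * a @ G @ a - a.sum(),  # negated dual objective
                   np.zeros(n),
                   bounds=[(0, None)] * n,
                   constraints=[{'type': 'eq', 'fun': lambda a: a @ y}])
    a = res.x
    w = ((a * y)[:, None] * X).sum(axis=0)               # KKT: w = sum_i a_i y_i x_i
    sv = a > 1e-6                                        # support vectors have a_i > 0
    b = float(np.mean(y[sv] - X[sv] @ w))                # bias recovered from support vectors
    return w, b
```

Only the points with nonzero multipliers (the support vectors) determine the separating hyperplane, which is the key property exploited by decomposition methods such as SVMlight.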

60 hours
Master Degree