The course aims to provide: i) methodological principles underlying classification; ii) feature selection and extraction techniques; iii) algorithms for supervised and unsupervised learning; iv) parametric and non-parametric estimation techniques; v) cross-validation techniques for validating classifiers.
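As a concrete illustration of the cross-validation techniques mentioned above, the sketch below shows a minimal k-fold procedure; the `fit`/`predict` callables are hypothetical placeholders for whatever classifier is under study, not an API from the course material.

```python
import numpy as np

def kfold_indices(n_samples, k, seed=0):
    """Split sample indices into k disjoint folds (a minimal sketch)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    return np.array_split(idx, k)

def cross_validate(X, y, fit, predict, k=5):
    """Return the mean accuracy over k train/test splits."""
    folds = kfold_indices(len(y), k)
    scores = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit(X[train], y[train])                       # train on k-1 folds
        scores.append(np.mean(predict(model, X[test]) == y[test]))  # test on the held-out fold
    return float(np.mean(scores))
```

Each fold serves once as the test set, so every sample contributes exactly once to the accuracy estimate.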
At the end of the course the student should be able to decide whether a classification problem can be solved with existing technology and, if so, which type of machine learning algorithm should be used for training.
Furthermore, the student must demonstrate the ability: i) to understand what kind of features or patterns should be extracted from the raw sensor data; ii) to understand what kind of classifier should be used for the problem at hand; iii) to understand the computational complexity of the recognition problem; iv) to produce software that recognizes real data; v) to use and modify other people's code, adapting it to the problem under examination.
This knowledge will allow the student to understand: i) which fit measures guarantee an effective classifier after its training phase; ii) which techniques are available for validating a classifier's results.
At the end of the course the student will be able to understand a machine learning or pattern recognition paper.
The course program is divided into two parts, a methodological one and an applicative one, which proceed hand in hand throughout the lessons.
--- Introduction: classification systems, types of classification, applications
>> SUPERVISED LEARNING <<
--- Bayes decision theory, risk minimization
--- linear, non-linear classifiers and discriminant functions
--- Selection and extraction of features, Principal Component Analysis, Fisher Linear Discriminant Analysis
--- Parameter estimation: Maximum Likelihood, MAP, Bayesian
--- Single Gaussian estimation and mixture of Gaussians: Expectation-Maximization algorithm and variational approximations (Mean Field)
--- Non-parametric methods for training a classifier: Parzen Windows and K-Nearest-Neighbor
--- Monte Carlo methods for dynamic density estimation, Particle Filtering
--- Markov models and Hidden Markov Models
>> UNSUPERVISED CLASSIFICATION <<
--- Partitional methods (k-means and x-means), hierarchical methods (single and complete linkage)
--- Internal and external validation criteria
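As an illustration of the partitional methods above, this is a minimal sketch of Lloyd's algorithm for k-means, alternating an assignment step and an update step until the centers stop moving; the initialization and stopping rule here are simple choices for demonstration, not those prescribed by the course.

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Lloyd's algorithm for k-means clustering; a minimal sketch."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]  # init: k random points
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # assignment step: each point goes to its nearest center
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = np.argmin(dists, axis=1)
        # update step: each center becomes the mean of its assigned points
        new_centers = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)])
        if np.allclose(new_centers, centers):   # converged
            break
        centers = new_centers
    return centers, labels
```

Internal validation criteria (e.g. within-cluster scatter) can then be computed from the returned centers and labels to compare different values of k.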
--- Binary and multiclass classification on real benchmarks
--- Face recognition
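The face recognition application classically builds on the Principal Component Analysis covered in the methodological part (the "eigenfaces" approach). The sketch below shows PCA via eigendecomposition of the sample covariance matrix; it is a didactic illustration under the assumption that each row of `X` is one (flattened) sample, not the specific pipeline used in the course labs.

```python
import numpy as np

def pca_fit(X, n_components):
    """PCA via eigendecomposition of the sample covariance; a minimal sketch."""
    mean = X.mean(axis=0)
    Xc = X - mean                                  # center the data
    cov = Xc.T @ Xc / (len(X) - 1)                 # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:n_components]
    return mean, eigvecs[:, order]                 # columns = principal axes

def pca_transform(X, mean, components):
    """Project (centered) data onto the retained principal axes."""
    return (X - mean) @ components
```

In the eigenfaces setting, each principal axis is itself an image ("eigenface"), and recognition is performed on the low-dimensional projections rather than on the raw pixels.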
- Richard O. Duda, Peter E. Hart, and David G. Stork. 2000. Pattern Classification (2nd Edition). Wiley-Interscience.
- Christopher M. Bishop. 2006. Pattern Recognition and Machine Learning (Information Science and Statistics). Springer-Verlag New York, Inc., Secaucus, NJ, USA.
The examination is oral; the required content is that covered during the lessons, as indicated by the course program. In particular, when necessary, a formal derivation of a procedure will be requested. In all cases, the questions will present a classification problem for which the student must propose the most suitable technique, formally justifying the choice. The final grade is based on the student's proposed solution to the question (20 points) and on the formal accuracy with which the solution is presented (10 points).