Semi-supervised learning aims to learn classification rules from both labeled and, typically more easily obtainable, unlabeled data. Though studied since the late 60s and early 70s, surprisingly little headway has been made with respect to methods that are guaranteed, in expectation, to outperform their supervised counterparts. A principal problem is that current state-of-the-art semi-supervised learning techniques make additional assumptions about the underlying data in an attempt to exploit all unlabeled instances. These assumptions, however, typically do not hold and, as a result, making them can considerably deteriorate classification performance.
After giving a brief impression of the day-to-day worries and topics of a pattern recognizer at Delft University of Technology, I will present and discuss some of my rather preliminary ideas and results concerning the problem of semi-supervision. My basic proposal is to develop semi-supervised learning techniques that do not make assumptions beyond those implicitly or explicitly made by the classification scheme employed. The overarching idea to achieve this is to exploit constraints and prior knowledge intrinsic to the classifiers considered. A [very, very, very] simple example using the nearest mean classifier is provided.
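To make the nearest mean example concrete, the sketch below illustrates one way a constraint intrinsic to that classifier can be exploited without extra assumptions: the class-prior-weighted average of the class means should equal the overall mean of all data, labeled and unlabeled. This is an illustrative reconstruction, not necessarily the exact construction used in the talk; all function names are hypothetical.

```python
import numpy as np

def nearest_mean_fit(X, y):
    """Supervised nearest mean classifier: estimate one mean per class."""
    classes = np.unique(y)
    means = np.array([X[y == c].mean(axis=0) for c in classes])
    return classes, means

def semi_supervised_means(X, y, X_unlabeled):
    """Illustrative intrinsic constraint: the prior-weighted average of the
    class means must equal the overall mean of ALL data (labeled plus
    unlabeled).  Shift the supervised means so this constraint holds."""
    classes, means = nearest_mean_fit(X, y)
    priors = np.array([(y == c).mean() for c in classes])
    overall_mean = np.vstack([X, X_unlabeled]).mean(axis=0)
    weighted = priors @ means            # current weighted mean of the means
    # One simple way to enforce the constraint: translate every class mean
    # by the same correction vector.
    return classes, means + (overall_mean - weighted)

def nearest_mean_predict(X, classes, means):
    """Assign each point to the class with the closest mean."""
    dists = ((X[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
    return classes[dists.argmin(axis=1)]
```

The correction uses the unlabeled points only through their contribution to the overall mean, so no cluster or manifold assumption beyond what the nearest mean classifier itself presupposes is introduced.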
© 2020 | Verona University