Kernel methods are a particularly useful framework for combining heterogeneous data modalities for machine learning purposes, and are now a well-established methodology in the context of maximum-margin classification. They have recently acquired new salience in deep learning theory, where kernel-based approaches have provided meaningful generalisation bounds and convex approximations to the learning problem.
However, missing inter-modal information presents intrinsic conceptual difficulties for multi-kernel machine learning, because the training objects and kernels must jointly define the composite embedding space in which classification occurs. Such situations arise when data fields are incomplete, or when the training sets for the different modalities are disjoint (e.g. in proprietary data sets for multimodal biometrics).
This talk will set out a pair of strategies for addressing this problem, the neutral-point method and the tomographic kernel fusion method, which sit at opposite ends of the spectrum of possibilities with respect to how much inter-modal information is inferred. We benchmark against the augmented kernel method, an order-insensitive approach derived from the direct sum of the constituent kernel matrices, and also against additive kernel combination, where the correspondence information is given a priori. Both proposed methods give rise to substantial performance improvements and also present interesting resonances with ensemble learning theory.
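The two baselines mentioned above can be made concrete with a minimal sketch. The snippet below (an illustration, not the speakers' implementation; the function names and the RBF choice are my own assumptions) contrasts additive combination, which requires a known object-to-object correspondence between modalities, with the augmented (direct-sum) construction, which places the two Gram matrices on the diagonal blocks of a larger one and so needs no correspondence:

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Gram matrix of a Gaussian RBF kernel on the rows of X
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

def additive_combination(K1, K2, w=0.5):
    # Weighted sum of per-modality Gram matrices; assumes row i of
    # each matrix refers to the same training object, i.e. the
    # inter-modal correspondence is given a priori.
    return w * K1 + (1 - w) * K2

def augmented_combination(K1, K2):
    # Direct sum: a block-diagonal Gram matrix over the disjoint
    # union of the two training sets; no correspondence is needed.
    n1, n2 = K1.shape[0], K2.shape[0]
    K = np.zeros((n1 + n2, n1 + n2))
    K[:n1, :n1] = K1
    K[n1:, n1:] = K2
    return K

# Toy data: 5 objects observed in a 3-d and a 2-d modality.
rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(5, 3)), rng.normal(size=(5, 2))
K_add = additive_combination(rbf_kernel(X1), rbf_kernel(X2))
K_aug = augmented_combination(rbf_kernel(X1), rbf_kernel(X2))
```

Both constructions preserve positive semidefiniteness, so either combined matrix is a valid kernel for a maximum-margin classifier; the difference is that the additive matrix stays 5x5 while the augmented one is 10x10, reflecting the disjoint union of the two training sets.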
Contact Person: A. Di Pierro