Multimodal interfaces, since they provide the user with multiple modes of interaction with a system, require combining signals acquired from different kinds of sensors. The talk will focus on an interaction paradigm combining movement and sound in a multimodal environment. This paradigm is based on extending the concept of gesture to sounds, and is validated by several experiments that investigate the communication capabilities of sound gestures. The use of multivariate analysis and pattern recognition methods will also be discussed, for defining a semantic space that makes it possible to represent qualities of sound and gesture at a higher level of abstraction.
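As a minimal sketch of what such a multivariate analysis might look like (not taken from the talk itself), one common approach is to project combined sound and gesture descriptors onto a few principal components, which then act as abstract "semantic" axes. All feature names, counts, and shapes below are hypothetical assumptions for illustration.

```python
# A hedged sketch: deriving a low-dimensional "semantic space" from
# combined sound and gesture features via PCA. The dataset here is
# synthetic and the descriptor counts are assumptions, not the talk's.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical dataset: 200 recorded performances, each described by
# 12 gesture descriptors (e.g., velocity/acceleration statistics)
# and 20 sound descriptors (e.g., spectral centroid, loudness).
gesture_features = rng.normal(size=(200, 12))
sound_features = rng.normal(size=(200, 20))
X = np.hstack([gesture_features, sound_features])

# Standardize so no single descriptor dominates, then project onto
# a handful of principal components serving as semantic axes.
X_std = StandardScaler().fit_transform(X)
pca = PCA(n_components=3)
semantic_coords = pca.fit_transform(X_std)  # shape (200, 3)

print("explained variance ratios:", pca.explained_variance_ratio_)
```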