Gesturally parameterized sound and video synthesis

Speaker: Sha Xin Wei, Topological Media Lab, Concordia University, Canada
Tuesday, October 4, 2005 at 5:30 PM; coffee, tea & co. at 5:00 PM
Current computer hardware permits the real-time synthesis of time-based media, such as video and sound textures, based on physical models. These models, such as variants of the wave equation, the Navier-Stokes equation for turbulent flow, and richer models used in music synthesis, have dozens to hundreds of continuous parameters. Some of these parameters can be driven by functions of sensor data from cameras or other physical sensors: accelerometers, photometers, force sensors and so forth.
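
To make the idea concrete, here is a minimal sketch (in Python, not taken from the talk) of a 1D damped wave equation stepped in real time as a sound texture, with its wave speed treated as a continuous parameter that a sensor stream could modulate. The grid sizes, rates, and the name sensor_value are illustrative assumptions.

    # Sketch: explicit finite-difference step of u_tt = c^2 u_xx - damping * u_t,
    # with the wave speed c modulated by a (stand-in) sensor value each sample.
    import numpy as np

    N, dx, dt = 256, 1.0 / 256, 1.0 / 8000        # grid size and step sizes (assumed)
    u = np.zeros(N); u_prev = np.zeros(N)         # displacement at t and t - dt
    u[N // 2] = 1.0                               # "pluck" the middle of the string

    def step(u, u_prev, wave_speed, damping):
        """One explicit finite-difference step with fixed ends."""
        lap = np.zeros_like(u)
        lap[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]   # discrete Laplacian
        r = (wave_speed * dt / dx) ** 2              # must stay below 1 for stability
        u_next = 2.0 * u - u_prev + r * lap - damping * dt * (u - u_prev)
        return u_next, u

    for t in range(8000):                            # one simulated second of audio
        sensor_value = 0.5                           # stand-in for accelerometer data
        c = 10.0 + 20.0 * sensor_value               # map sensor value to wave speed
        u, u_prev = step(u, u_prev, wave_speed=c, damping=2.0)
        sample = u[N // 4]                           # "listen" at one point on the string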

This provides a phenomenologically rich responsive medium and an experimental apparatus for the study of intentional and non-intentional gesture. One technical problem is how to map gesture or movement to rich temporal media in ways that are a-linguistically learnable, yet plausibly rich and expressive.
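
As one illustration of this mapping problem (a sketch under assumptions, not the speaker's method), the following Python smooths a raw accelerometer stream into an "energy" feature and rescales it into hypothetical synthesis-parameter ranges such as wave_speed and damping.

    # Sketch: smooth a sensor stream, extract a movement-energy feature, and map it
    # into the continuous parameters a physical-model synthesizer might expose.
    import math

    class GestureMapper:
        def __init__(self, smoothing=0.9):
            self.smoothing = smoothing       # exponential smoothing factor (assumed)
            self.energy = 0.0                # running estimate of movement energy

        def update(self, accel_xyz):
            """Fold one accelerometer sample (x, y, z) into the smoothed feature."""
            magnitude = math.sqrt(sum(a * a for a in accel_xyz))
            self.energy = self.smoothing * self.energy + (1.0 - self.smoothing) * magnitude
            return self.energy

        def to_parameters(self):
            """Rescale the feature into hypothetical synthesis-parameter ranges."""
            e = min(self.energy, 1.0)
            return {
                "wave_speed": 10.0 + 20.0 * e,   # faster movement -> brighter texture
                "damping":    4.0 - 3.0 * e,     # stiller movement -> faster decay
            }

    mapper = GestureMapper()
    for frame in [(0.1, 0.0, 0.2), (0.4, 0.1, 0.3)]:   # stand-in sensor frames
        mapper.update(frame)
        params = mapper.to_parameters()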

Place
Ca' Vignal - Piramide, Floor 0, Hall Verde

Programme Director
Davide Rocchesso

Publication date
September 6, 2005

