The seminar introduces technologies and techniques for gestural interaction, discussing both technical and design challenges. We focus on the description of gestures and on their effective exploitation for creating usable interfaces. We analyse the strengths and weaknesses of different recognition devices, programming models and interface designs, and we provide insights into the open research questions in this field. In particular, we focus on the limited flexibility of current frameworks, which notify the application only when a gesture has been recognised in its entirety; any finer-grained feedback requires the programmer to perform an explicit temporal analysis of the raw sensor data. We describe how compositional and declarative models for gesture definition attempt to solve this problem. Basic traits are used as building blocks for defining gestures; each trait notifies the change of a feature value. A complex gesture is defined by composing sub-gestures through a set of operators. The user interface behaviour can be associated with the recognition of the whole gesture or with any of its sub-components, addressing the problem of granularity in event notification. Such models can be instantiated for different gesture recognition supports, such as multi-touch screens and full-body gesture devices (e.g. the Microsoft Kinect). Beyond solving the event granularity problem, we discuss how the compositional approach separates the definition of a gesture from the user interface behaviour.
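The compositional idea above can be illustrated with a minimal sketch. The class and event names below (`Trait`, `Sequence`, `touch_down`, etc.) are hypothetical illustrations, not the API of any specific framework discussed in the seminar: traits are atomic building blocks that fire when a feature value changes, an operator composes them into a larger gesture, and a handler can be attached at any level of the composition.

```python
from typing import Callable, List


class Expr:
    """A gesture expression: consumes feature events, notifies on completion."""

    def __init__(self) -> None:
        self.handlers: List[Callable] = []

    def on_complete(self, handler: Callable) -> "Expr":
        # Behaviour can be attached to ANY node, not only the root gesture.
        self.handlers.append(handler)
        return self

    def _notify(self) -> None:
        for h in self.handlers:
            h(self)

    def feed(self, event: str) -> bool:
        """Return True when this (sub-)gesture is fully recognised."""
        raise NotImplementedError

    def reset(self) -> None:
        raise NotImplementedError


class Trait(Expr):
    """Basic building block: completes when one matching feature event arrives."""

    def __init__(self, name: str) -> None:
        super().__init__()
        self.name = name

    def feed(self, event: str) -> bool:
        if event == self.name:
            self._notify()
            return True
        return False

    def reset(self) -> None:
        pass


class Sequence(Expr):
    """Composition operator: sub-gestures must complete in order."""

    def __init__(self, *parts: Expr) -> None:
        super().__init__()
        self.parts = list(parts)
        self.index = 0  # next sub-gesture expected to complete

    def feed(self, event: str) -> bool:
        if self.parts[self.index].feed(event):
            self.index += 1
            if self.index == len(self.parts):
                self._notify()
                self.reset()
                return True
        return False

    def reset(self) -> None:
        self.index = 0
        for p in self.parts:
            p.reset()


# A pan gesture composed from three traits; handlers fire both on the
# intermediate "touch_down" trait and on the whole composed gesture.
down = Trait("touch_down").on_complete(lambda t: print("down detected"))
pan = Sequence(down, Trait("touch_move"), Trait("touch_up"))
pan.on_complete(lambda g: print("pan recognised"))

for ev in ["touch_down", "touch_move", "touch_up"]:
    pan.feed(ev)
```

In this sketch the interface behaviour (the handlers) is kept separate from the gesture definition itself, and the `Sequence` operator stands in for the richer operator sets (iteration, parallel, choice) that declarative gesture models typically provide.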