Mobile Social Signal Processing – Capturing Performative Input

This week, I published an article in LNCS Mobile Social Signal Processing that describes using performative actions as input in mobile settings. I had never focused on social signal processing in my work until Alessandro came to the University of Glasgow and I realised there was some interesting overlap between multimodal interaction design and social signal processing.

So here is my first article looking at social signal processing for performative interaction.

Capturing Performative Actions for Interaction and Social Awareness

Abstract: Capturing and making use of observable actions and behaviours presents compelling opportunities for allowing end-users to interact with such data and each other. For example, simple visualisations based on detected behaviour or context allow users to interpret this data using their existing knowledge and awareness of social cues. This paper presents one such “remote awareness” application where users can interpret a visualisation based on simple behaviours to gain a sense of awareness of other users’ current context or actions. Using a prop embedded with sensors, users could control the visualisation using gesture and voice-based input. The results of this work describe the kinds of performances users generated during the trial, how they imagined the actions of their fellow participants based on the visualisation, and how the props containing sensors were used to support, or in some cases hinder, successful performance and interaction.
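
To give a rough flavour of the kind of sensing this involves, here is a minimal sketch (my own illustration, not code from the paper) of how readings from a sensor-equipped prop might be reduced to a coarse behaviour label that a remote-awareness visualisation could render for other users. The thresholds, class names, and colour mapping are all made up for the example.

```python
# Hypothetical sketch: classify a window of accelerometer samples from a
# prop into a coarse "activity level" and map it to a visualisation state.
# Thresholds and labels are illustrative placeholders, not values from the study.
import math
from dataclasses import dataclass


@dataclass
class Sample:
    x: float  # accelerometer axes, in g
    y: float
    z: float


def activity_level(samples: list[Sample]) -> str:
    """Classify a window of samples using the variance of acceleration magnitude."""
    magnitudes = [math.sqrt(s.x**2 + s.y**2 + s.z**2) for s in samples]
    mean = sum(magnitudes) / len(magnitudes)
    variance = sum((m - mean) ** 2 for m in magnitudes) / len(magnitudes)

    if variance < 0.01:
        return "still"        # prop at rest, user likely idle
    elif variance < 0.2:
        return "gentle"       # slow, deliberate movement
    return "energetic"        # vigorous gesture or performance


# A remote viewer's visualisation might simply colour an avatar by level:
PALETTE = {"still": "grey", "gentle": "blue", "energetic": "orange"}

if __name__ == "__main__":
    window = [Sample(0.01, 0.98, 0.05), Sample(0.40, 1.30, 0.20),
              Sample(-0.25, 0.70, 0.90), Sample(0.05, 1.05, 0.10)]
    level = activity_level(window)
    print(level, "->", PALETTE[level])
```

The point of keeping the mapping this coarse is that the visualisation stays ambiguous: viewers fill in the detail from their own social knowledge, which is exactly the interpretive awareness the paper is interested in.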