Understanding Public Evaluation: Quantifying Experimenter Intervention

A video for this paper, “Understanding Public Evaluation: Quantifying Experimenter Intervention,” is available from Julie Williamson on Vimeo.

We’re very excited that our CHI 2017 paper has been given a Best Paper Award (top 1% of submissions).

Public evaluations are popular because some research questions can only be answered by going “into the wild.” Different approaches place experimenters in different roles during deployment, which has implications for the kinds of data that can be collected and the potential bias introduced by the experimenter. This paper expands our understanding of how experimenter roles impact public evaluations and provides an empirical basis for considering different evaluation approaches. We completed an evaluation of a playful gesture-controlled display, not to understand interaction at the display but to compare different evaluation approaches. The conditions placed the experimenter in three roles (steward observer, overt observer, and covert observer) to measure the effect of experimenter presence and to analyse the strengths and weaknesses of each approach.

Full text will be available after publication in May.

Pervasive Displays 2014: Analysing Pedestrian Traffic Around Public Displays

The technique visualises pedestrian traffic and can show walking direction, speed, and path curvature.

In June 2014, I will present the results of my paper on a method for evaluating displays in public spaces. The proposed technique brings together observational research methods from sociology with social signal processing to automatically generate behavioural maps of public display usage. It is non-intrusive, non-disruptive to the interaction being evaluated, and applicable to many different kinds of public displays in a variety of contexts. Another interesting aspect of this approach is that it captures both interacting users and non-interacting or avoiding passers-by. Upon publication, all of the data and code used in the paper will be made openly available.

Abstract: This paper presents a powerful approach to evaluating public technologies by capturing and analysing pedestrian traffic using computer vision. This approach is highly flexible and scales better than traditional ethnographic techniques often used to evaluate technology in public spaces. This technique can be used to evaluate a wide variety of public installations and the data collected complements existing approaches. Our technique allows behavioural analysis of both interacting users and non-interacting passers-by. This gives us the tools to understand how technology changes public spaces, how passers-by approach or avoid public technologies, and how different interaction styles work in public spaces. In the paper, we apply this technique to two large public displays and a street performance. The results demonstrate how metrics such as walking speed and proximity can be used for analysis, and how this can be used to capture disruption to pedestrian traffic and passer-by approach patterns.
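To make the trajectory metrics concrete, here is a minimal sketch in Python (with NumPy) of how walking speed and path curvature might be computed from a tracked pedestrian path. The data format, the `trajectory_metrics` function, and the curvature measure are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def trajectory_metrics(points):
    """Mean walking speed and a curvature measure for one pedestrian track.

    points: rows of (t_seconds, x_metres, y_metres), e.g. as produced by a
    computer-vision tracker. Illustrative only, not the paper's pipeline.
    """
    points = np.asarray(points, dtype=float)
    t, xy = points[:, 0], points[:, 1:]
    steps = np.diff(xy, axis=0)              # displacement between samples
    dists = np.linalg.norm(steps, axis=1)    # metres per step
    speed = dists.sum() / (t[-1] - t[0])     # mean walking speed (m/s)

    # Curvature proxy: total heading change divided by path length (rad/m).
    headings = np.unwrap(np.arctan2(steps[:, 1], steps[:, 0]))
    curvature = np.abs(np.diff(headings)).sum() / max(dists.sum(), 1e-9)
    return speed, curvature

# A straight, steady walk at 1.2 m/s: curvature is ~0.
straight = [(i * 0.5, i * 0.6, 0.0) for i in range(10)]
print(trajectory_metrics(straight))  # ~(1.2, 0.0)
```

A curving or hesitating passer-by would show higher curvature and lower mean speed, which is the kind of signal that distinguishes approach from avoidance behaviour.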

Download the paper

Mobile Social Signal Processing – Capturing Performative Input

This week, I published an article in the LNCS volume Mobile Social Signal Processing that describes using performative actions as input in mobile settings. I had never focused on social signal processing in my work until Alessandro came to the University of Glasgow and I realised there was interesting overlap between multimodal interaction design and social signal processing.

So here is my first article looking at social signal processing for performative interaction.

Capturing Performative Actions for Interaction and Social Awareness

Abstract: Capturing and making use of observable actions and behaviours presents compelling opportunities for allowing end-users to interact with such data and each other. For example, simple visualisations based on detected behaviour or context allow users to interpret this data based on their existing knowledge and awareness of social cues. This paper presents one such “remote awareness” application where users can interpret a visualisation based on simple behaviours to gain a sense of awareness of other users’ current context or actions. Using a prop embedded with sensors, users could control the visualisation using gesture and voice-based input. The results of this work describe the kinds of performances users generated during the trial, how they imagined the actions of their fellow participants based on the visualisation, and how the props containing sensors were used to support, or in some cases hinder, successful performance and interaction.
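As a rough illustration of reducing sensed behaviour to a simple, ambiguous awareness signal, here is a hypothetical sketch that collapses accelerometer samples from a sensor-equipped prop into a single activity level a remote visualisation could render. The `activity_level` function, its window size, and the 0.5 g normalisation are assumptions for illustration, not the system described in the article.

```python
import math

def activity_level(accel_samples, window=50):
    """Collapse recent accelerometer samples into an activity level in [0, 1].

    accel_samples: list of (ax, ay, az) tuples in g. A hypothetical reduction
    of sensed behaviour to the kind of coarse signal a remote awareness
    visualisation might render.
    """
    recent = accel_samples[-window:]
    # Mean deviation of acceleration magnitude from 1 g approximates movement energy.
    energy = sum(abs(math.sqrt(ax * ax + ay * ay + az * az) - 1.0)
                 for ax, ay, az in recent) / len(recent)
    return min(energy / 0.5, 1.0)  # 0.5 g mean deviation treated as "very active"

print(activity_level([(0.0, 0.0, 1.0)] * 50))  # prop at rest -> 0.0
print(activity_level([(0.3, 0.2, 1.4)] * 50))  # prop being waved -> ~0.89
```

Keeping the signal this coarse is deliberate: the ambiguity leaves room for viewers to interpret the visualisation using their own social knowledge, as the abstract describes.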

ICMI2013 – Mo!Games: evaluating mobile gestures in the wild

One of the apps was a mobile game where users had to toss marshmallows onto a target using gestures.

This year I attended ICMI 2013 in Sydney, Australia, where I presented our full-length paper entitled Mo!Games: evaluating mobile gestures in the wild. The paper describes an “in the wild” study of a mobile application that uses head, wrist, and device-based gestures. The goal of the study was to explore how users performed gesture-based interaction in their everyday lives and how they developed preferences for different gesture styles.

Abstract: The user experience of performing gesture-based interactions in public spaces is highly dependent on context, where users must decide which gestures they will use and how they will perform them. In order to complete a realistic evaluation of how users make these decisions, the evaluation of such user experiences must be completed “in the wild.” Furthermore, studies need to be completed within different cultural contexts in order to understand how users might adopt gesture differently in different cultures. This paper presents such a study using a mobile gesture-based game, where users in the UK and India interacted with this game over the span of 6 days. The results of this study demonstrate similarities between gesture use in these divergent cultural settings, illustrate factors that influence gesture acceptance such as perceived size of movement and perceived accuracy, and provide insights into the interaction design of mobile gestures when gestures are distributed across the body.
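For flavour, here is a generic sketch of how a toss-like gesture might be spotted in a stream of accelerometer magnitudes: a simple hysteresis detector that fires on a sharp acceleration spike and re-arms once movement settles. The `detect_tosses` function and its thresholds are illustrative, not the recogniser used in the paper.

```python
def detect_tosses(magnitudes, on=2.5, off=1.2):
    """Return indices where a toss-like acceleration spike begins.

    magnitudes: acceleration magnitudes in g. A generic hysteresis detector,
    not the gesture recogniser from the paper.
    """
    events, armed = [], True
    for i, a in enumerate(magnitudes):
        if armed and a > on:          # sharp spike: candidate toss
            events.append(i)
            armed = False             # ignore the rest of this spike
        elif not armed and a < off:   # movement settled: re-arm
            armed = True
    return events

# One flick of the wrist followed by stillness -> one event.
print(detect_tosses([1.0, 1.1, 3.2, 2.8, 1.0, 1.0]))  # [2]
```

The `on` threshold is exactly the kind of parameter the study's findings bear on: perceived size of movement and perceived accuracy both shape which gestures users are willing to perform in public.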

Download the Paper

Work in Progress and Workshop Papers at CHI 2013

This year at CHI I will be presenting a work in progress and attending the “Experiencing Interactivity in Public Places” workshop.


My work-in-progress presentation discusses work I completed on the MultiMemoHome Project with my colleagues Marilyn McGee-Lennon, Euan Freeman, and Stephen Brewster. The paper describes the co-design of a smartpen and paper calendar-based reminder system for the home. The design sessions involved older adults and used experience prototypes to explore the possibility of exploiting paper-based calendars for multimodal reminder systems using a smartpen. The initial results demonstrate successful interaction techniques that make a strong link between paper interaction and scheduling reminders, such as using smartpen annotations and using the location of written reminders within a paper diary to schedule digital reminders. The results also describe important physical aspects of paper diaries as discussed by older adults, such as daily/weekly layouts and binding. Full Paper
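As a toy illustration of the “location within a paper diary” idea, the sketch below maps the horizontal position of a smartpen stroke on a hypothetical weekly page layout to the day a digital reminder would be scheduled for. The page width, seven-column layout, and `stroke_to_day` function are invented for illustration and are not the MultiMemoHome implementation.

```python
from datetime import date, timedelta

PAGE_WIDTH_MM = 150.0  # hypothetical printed page width

def stroke_to_day(week_monday, stroke_x_mm):
    """Map a smartpen stroke's x position on a weekly diary page to a date.

    Assumes seven equal-width day columns across the page; both the layout
    and this mapping are illustrative assumptions.
    """
    column = min(int(stroke_x_mm / (PAGE_WIDTH_MM / 7)), 6)
    return week_monday + timedelta(days=column)

# An annotation written about two-thirds across the page for the week of
# 29 April 2013 schedules a reminder on the Friday.
print(stroke_to_day(date(2013, 4, 29), 95.0))  # 2013-05-03
```

The appeal of this pattern is that the paper diary stays the primary artefact: the digital reminder is a side effect of writing where the user would have written anyway.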


My position paper for the EIPS workshop discusses my current work as a SICSA Fellow in Multimodal Interaction at the University of Glasgow. When interactive systems require users to “perform” in front of others, the experience of interacting changes dramatically. This “performative” dynamic can be purposefully exploited in the design and evaluation of interactive systems to create compelling experiences. In this work, I explore these issues using highly flexible low-resolution displays composed of strips of individually addressable LED lights. These low-resolution displays can take a wide variety of forms and can be deployed in many different settings. I pair these displays with depth sensors to add playful interactivity, whole-body interaction, and proxemic interaction. Such a combination of flexible output and depth-based input can be used for a variety of playful and creative interfaces. In this paper, I describe some of the most promising directions made possible by this technology, such as ambient interfaces that create playful reactive experiences, visualise pedestrian traffic, and highlight social dynamics. Full Paper
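To illustrate the pairing of depth-based input with low-resolution LED output, here is a hypothetical per-frame mapping from a passer-by's distance, as a depth sensor might report it, to how much of an addressable LED strip lights up. The `proxemic_frame` function, distance bounds, and strip length are assumptions for illustration, not the behaviour of any deployed installation.

```python
def proxemic_frame(distance_m, n_leds=60, near=0.5, far=4.0):
    """Per-LED brightness for one frame of a proxemic ambient display.

    distance_m: the nearest passer-by's distance from the depth sensor.
    As someone approaches, more of the strip lights up. A hypothetical
    mapping, not a deployed system's behaviour.
    """
    # Normalise so that `near` maps to 1.0 (fully lit) and `far` to 0.0 (dark).
    closeness = max(0.0, min(1.0, (far - distance_m) / (far - near)))
    lit = int(round(closeness * n_leds))
    return [255 if i < lit else 0 for i in range(n_leds)]  # 8-bit value per LED

frame = proxemic_frame(1.0)          # someone standing close to the display
print(sum(1 for v in frame if v))    # ~51 of 60 LEDs lit
```

Even a mapping this simple is enough to make the display legibly reactive to proximity, which is the hook for the playful, whole-body interactions the position paper describes.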