Performative Interaction @ CHI 2019

The research group has social acceptability, group experiences, and virtual reality on the mind at ACM CHI 2019.

Along with colleagues from Ulm University, LMU Munich, Universität Hamburg, and NYU, we organised a workshop on the challenges of using immersive headsets in public and social settings. As part of the workshop, we set out in Glasgow to get some first-hand experience.

Workshop participants try Oculus Go in a busy restaurant.

You can access our workshop abstract here:

Challenges Using Head-Mounted Displays in Shared and Social Spaces

Jan Gugenheimer, Christian Mai, Mark McGill, Julie Williamson, Frank Steinicke, Ken Perlin
CHI EA ’19 Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, 2019

We also presented our work on the social acceptability of using virtual reality headsets while travelling, along with new techniques for improving user comfort and acceptance of these devices using mixed reality.

You can access the full text here:

PlaneVR: Social Acceptability of Virtual Reality for Aeroplane Passengers

Julie R. Williamson, Mark McGill, Khari Outram
CHI ’19 Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 2019

FET Open Levitate

Scientists at the University of Glasgow, UK, have managed to suspend small polystyrene particles in mid-air, supported only by ultrasonic acoustic waves. This is acoustic levitation. The technology may lead to new kinds of displays for commanding machines, and could revolutionise human-machine interaction. The study runs under the Levitate project, supported by the European Commission's Future and Emerging Technologies (FET) research programme.

Public and Performative Interaction @ CHI 2018

We are presenting two papers at ACM CHI 2018. The links below provide open access copies of these papers.

Object Manipulation in Virtual Reality Under Increasing Levels of Translational Gain

Room-scale Virtual Reality (VR) has become an affordable consumer reality, with applications ranging from entertainment to productivity. However, the limited physical space available for room-scale VR in the typical home or office environment poses a significant problem. To solve this, physical spaces can be extended by amplifying the mapping of physical to virtual movement (translational gain). Although amplified movement has been used since the earliest days of VR, little is known about how it influences reach-based interactions with virtual objects, now a standard feature of consumer VR. Consequently, this paper explores the picking and placing of virtual objects in VR for the first time, with translational gains of between 1x (a one-to-one mapping of a 3.5 m × 3.5 m virtual space to the same sized physical space) and 3x (10.5 m × 10.5 m virtual mapped to 3.5 m × 3.5 m physical). Results show that reaching accuracy is maintained for up to 2x gain; however, going beyond this diminishes accuracy and increases simulator sickness and perceived workload. We suggest gain levels of 1.5x to 1.75x can be utilized without compromising the usability of a VR task, significantly expanding the bounds of interactive room-scale VR.
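To give a concrete sense of what translational gain means in practice, here is a minimal Python sketch of the mapping: horizontal displacement from the centre of the tracked space is amplified by the gain factor before being applied in the virtual environment. The function and variable names are illustrative assumptions, not the implementation used in the paper.

```python
# Minimal sketch of translational gain (illustrative, not the paper's code).
# Physical positions are (x, y, z) in metres; y is height and is left unscaled.

def apply_translational_gain(physical_pos, gain, centre=(0.0, 0.0, 0.0)):
    """Map a tracked physical position to a virtual position by amplifying
    horizontal displacement from the room centre by `gain`."""
    px, py, pz = physical_pos
    cx, _, cz = centre
    vx = cx + (px - cx) * gain
    vz = cz + (pz - cz) * gain
    return (vx, py, vz)

# With 2x gain, a 3.5 m x 3.5 m physical room behaves like a 7 m x 7 m virtual one:
print(apply_translational_gain((1.75, 1.6, -1.75), gain=2.0))  # -> (3.5, 1.6, -3.5)
```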

Acoustic levitation enables a radical new type of human-computer interface composed of small levitating objects. For the first time, we investigate the selection of such objects, an important part of interaction with a levitating object display. We present Point-and-Shake, a mid-air pointing interaction for selecting levitating objects, with feedback given through object movement. We describe the implementation of this technique and present two user studies that evaluate it. The first study found that users could accurately (96%) and quickly (4.1s) select objects by pointing at them. The second study found that users were able to accurately (95%) and quickly (3s) select occluded objects. These results show that Point-and-Shake is an effective way of initiating interaction with levitating object displays.

Point-and-Shake: Selecting from Levitating Object Displays

Euan Freeman, Julie Williamson, Sriram Subramanian, Stephen Brewster
CHI ’18 Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 2018
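For readers curious how pointing-based selection of a levitating particle might work, the sketch below shows one plausible take on the geometry: find the particle closest to the user's pointing ray and, if it lies within a small tolerance, select it (feedback would then be given by shaking that particle). The function names, tolerance, and coordinates are assumptions for illustration, not the Point-and-Shake implementation.

```python
import math

# Illustrative ray-based selection of a levitating particle (assumed geometry,
# not the authors' implementation). Positions are (x, y, z) in metres.

def distance_to_ray(origin, direction, point):
    """Perpendicular distance from `point` to a ray; `direction` is a unit vector.
    Points behind the pointing hand are ignored."""
    vx, vy, vz = (point[i] - origin[i] for i in range(3))
    t = vx * direction[0] + vy * direction[1] + vz * direction[2]
    if t < 0:
        return float("inf")
    closest = tuple(origin[i] + t * direction[i] for i in range(3))
    return math.dist(point, closest)

def select_particle(origin, direction, particles, tolerance=0.01):
    """Return the index of the particle nearest the pointing ray, or None."""
    best, best_dist = None, tolerance
    for i, pos in enumerate(particles):
        d = distance_to_ray(origin, direction, pos)
        if d < best_dist:
            best, best_dist = i, d
    return best

beads = [(0.0, 0.05, 0.0), (0.02, 0.05, 0.0)]
# Pointing straight at the first bead from 30 cm away selects it:
print(select_particle((0.0, 0.05, -0.3), (0.0, 0.0, 1.0), beads))  # -> 0
```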

Understanding Public Evaluation: Quantifying Experimenter Intervention

Video: Understanding Public Evaluation: Quantifying Experimenter Intervention (Julie Williamson on Vimeo).

We’re very excited that our CHI 2017 paper has been given a Best Paper Award (top 1% of submissions).

Public evaluations are popular because some research questions can only be answered by turning “to the wild.” Different approaches place experimenters in different roles during deployment, which has implications for the kinds of data that can be collected and the potential bias introduced by the experimenter. This paper expands our understanding of how experimenter roles impact public evaluations and provides an empirical basis to consider different evaluation approaches. We completed an evaluation of a playful gesture-controlled display – not to understand interaction at the display but to compare different evaluation approaches. The conditions placed the experimenter in three roles (steward observer, overt observer, and covert observer) to measure the effect of experimenter presence and analyse the strengths and weaknesses of each approach.

Full text will be available after publication in May.

Pervasive Displays 2014: Analysing Pedestrian Traffic Around Public Displays

The technique visualises pedestrian traffic and can show walking direction, speed, and path curvature.

In June 2014, I will present my paper on a method for evaluating displays in public spaces. The proposed technique brings together observational research techniques from sociology with social signal processing to automatically generate behavioural maps of public display usage. This technique can be used in a variety of contexts to evaluate many different kinds of public displays and is non-intrusive and non-disruptive to the interaction being evaluated. Another interesting aspect of this approach is that it can capture both interacting users and non-interacting or avoiding passers-by. Upon publication, all of the data and code used in the paper will be made openly available.

Abstract: This paper presents a powerful approach to evaluating public technologies by capturing and analysing pedestrian traffic using computer vision. This approach is highly flexible and scales better than traditional ethnographic techniques often used to evaluate technology in public spaces. This technique can be used to evaluate a wide variety of public installations and the data collected complements existing approaches. Our technique allows behavioural analysis of both interacting users and non-interacting passers-by. This gives us the tools to understand how technology changes public spaces, how passers-by approach or avoid public technologies, and how different interaction styles work in public spaces. In the paper, we apply this technique to two large public displays and a street performance. The results demonstrate how metrics such as walking speed and proximity can be used for analysis, and how this can be used to capture disruption to pedestrian traffic and passer-by approach patterns.

Download the paper
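As a rough illustration of the kind of metrics the behavioural maps are built from, the snippet below computes walking speed and closest approach to a display from a tracked trajectory. In the paper the trajectories come from a computer vision pipeline; here they are assumed to be lists of (timestamp, x, y) ground-plane samples, and the function names are my own.

```python
import math

# Assumed input: a pedestrian trajectory as (t, x, y) samples, seconds and metres.
# These are illustrative metrics, not the code released with the paper.

def walking_speed(trajectory):
    """Mean walking speed in m/s over the trajectory."""
    dist = sum(math.hypot(x1 - x0, y1 - y0)
               for (_, x0, y0), (_, x1, y1) in zip(trajectory, trajectory[1:]))
    duration = trajectory[-1][0] - trajectory[0][0]
    return dist / duration if duration > 0 else 0.0

def closest_approach(trajectory, display_pos):
    """Minimum distance (m) between the pedestrian and a display at (x, y)."""
    dx, dy = display_pos
    return min(math.hypot(x - dx, y - dy) for _, x, y in trajectory)

path = [(0.0, 0.0, 0.0), (1.0, 1.2, 0.1), (2.0, 2.3, 0.4)]
print(walking_speed(path))                 # ~1.17 m/s
print(closest_approach(path, (2.0, 0.0)))  # 0.5 m at the nearest sample
```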

Mobile Social Signal Processing – Capturing Performative Input

This week, I published an article in LNCS Mobile Social Signal Processing that describes using performative actions as input in mobile settings. I had never focused on social signal processing in my work until Alessandro came to the University of Glasgow and I realised there was some interesting overlap between multimodal interaction design and social signal processing.

So here is my first article looking at social signal processing for performative interaction.

Capturing Performative Actions for Interaction and Social Awareness

Abstract: Capturing and making use of observable actions and behaviours presents compelling opportunities for allowing end-users to interact with such data and each other. For example, simple visualisations based on detected behaviour or context allow users to interpret this data based on their existing knowledge and awareness of social cues. This paper presents one such “remote awareness” application where users can interpret a visualisation based on simple behaviours to gain a sense of awareness of other users’ current context or actions. Using a prop embedded with sensors, users could control the visualisation using gesture and voice-based input. The results of this work describe the kinds of performances users generated during the trial, how they imagined the actions of their fellow participants based on the visualisation, and how the props containing sensors were used to support, or in some cases hinder, successful performance and interaction.

ICMI 2013 – Mo!Games: evaluating mobile gestures in the wild

One of the apps was a mobile game where users had to toss marshmallows onto a target using gestures.

This year I attended ICMI 2013 in Sydney, Australia. I presented our full-length paper entitled Mo!Games: evaluating mobile gestures in the wild. The paper describes an in-the-wild study of a mobile application that uses head, wrist, and device-based gestures. The goal of the study was to explore how users performed gesture-based interaction in their everyday lives and how they developed preferences for different gesture styles.

Abstract: The user experience of performing gesture-based interactions in public spaces is highly dependent on context, where users must decide which gestures they will use and how they will perform them. In order to complete a realistic evaluation of how users make these decisions, the evaluation of such user experiences must be completed “in the wild.” Furthermore, studies need to be completed within different cultural contexts in order to understand how users might adopt gesture differently in different cultures. This paper presents such a study using a mobile gesture-based game, where users in the UK and India interacted with this game over the span of 6 days. The results of this study demonstrate similarities between gesture use in these divergent cultural settings, illustrate factors that influence gesture acceptance such as perceived size of movement and perceived accuracy, and provide insights into the interaction design of mobile gestures when gestures are distributed across the body.

Download the Paper

Work in Progress and Workshop Papers at CHI 2013

This year at CHI I will be presenting a work-in-progress paper and attending the “Experiencing Interactivity in Public Spaces” workshop.

My work-in-progress presentation discusses work I completed on the MultiMemoHome Project with my colleagues Marilyn McGee-Lennon, Euan Freeman, and Stephen Brewster. This paper describes the co-design of a smartpen and paper calendar-based reminder system for the home. The co-design sessions involved older adults and used experience prototypes to explore the possibility of exploiting paper-based calendars for multimodal reminder systems using a smartpen. The initial results demonstrate successful interaction techniques that make a strong link between paper interaction and scheduling reminders, such as using smartpen annotations and using the location of written reminders within a paper diary to schedule digital reminders. The results also describe important physical aspects of paper diaries as discussed by older adults, such as daily/weekly layouts and binding. Full Paper

My position paper for the EIPS workshop discusses my current work as a SICSA Fellow in Multimodal Interaction at Glasgow University. When interactive systems require users to “perform” in front of others, the experience of interacting dramatically changes. This “performative” dynamic can be purposefully exploited in the design and evaluation of interactive systems to create compelling experiences. In this work, I explore these issues using highly flexible low-resolution displays composed of strips of individually addressable LED lights. These low-resolution displays can take a wide variety of forms and can be deployed in many different settings. I pair these displays with depth sensors to add playful interactivity, whole body interaction, and proxemic interaction. Such a combination of flexible output and depth-based input can be used for a variety of playful and creative interfaces. In this paper, I describe some of the most promising directions made possible using this technology, such as ambient interfaces that create playful reactive experiences, visualize pedestrian traffic, and highlight social dynamics. Full Paper
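To make the proxemic pairing concrete, here is a small sketch of how a depth reading could drive an addressable LED strip so the display lights up and warms in colour as someone approaches. The strip length, distance range, and function are assumptions for illustration; a real deployment would push the resulting frame to the strip through whatever driver the hardware uses.

```python
NUM_LEDS = 60            # assumed strip length
NEAR, FAR = 0.5, 4.0     # assumed interaction range of the depth sensor, in metres

def frame_for_distance(distance_m):
    """Return (r, g, b) values for each LED: the closer a passer-by stands,
    the more of the strip lights up and the warmer it gets."""
    t = max(0.0, min(1.0, (FAR - distance_m) / (FAR - NEAR)))  # 0 = far, 1 = near
    lit = int(t * NUM_LEDS)
    colour = (int(255 * t), int(255 * (1 - t)), 40)            # green -> red
    return [colour if i < lit else (0, 0, 0) for i in range(NUM_LEDS)]

# Someone standing 1 m away lights most of the strip in a warm colour:
frame = frame_for_distance(1.0)
print(sum(1 for px in frame if px != (0, 0, 0)), frame[0])  # 51 (218, 36, 40)
```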