My colleagues Radu-Daniel Vatavu and Quincy Brown and I have combined our efforts on exploring touch interaction for children in a paper that has been accepted to the INTERACT 2015 conference! The paper, titled “Child or Adult? Inferring Smartphone Users’ Age Group from Touch Measurements Alone,” presents the results of our experiments to classify whether a user is a young child (ages 3 to 6) or an adult from properties of their touch input alone. Radu used his dataset of 3-to-6-year-olds and supplemented it with our MTAGIC dataset. The abstract is as follows:
We present a technique that classifies users’ age group, i.e., child or adult, from touch coordinates captured on touch-screen devices. Our technique delivered 86.5% accuracy (user-independent) on a dataset of 119 participants (89 children ages 3 to 6) when classifying each touch event one at a time and up to 99% accuracy when using a window of 7+ consecutive touches. Our results establish that it is possible to reliably classify a smartphone user on the fly as a child or an adult with high accuracy using only basic data about their touches, and will inform new, automatically adaptive interfaces for touch-screen devices.
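As a rough illustration of the windowing idea in the abstract (classify each touch event individually, then combine several consecutive decisions by majority vote), here is a minimal Python sketch. The per-touch classifier and its `pressure_area` feature are hypothetical placeholders; the actual features and model are described in the paper.

```python
from collections import Counter

def classify_touch(touch):
    """Hypothetical per-touch classifier: returns 'child' or 'adult'.
    Placeholder rule for illustration only; the paper's model uses
    real features of each touch event."""
    return "child" if touch["pressure_area"] > 0.5 else "adult"

def classify_user(touches, window=7):
    """Majority vote over a window of consecutive per-touch decisions."""
    votes = [classify_touch(t) for t in touches[:window]]
    return Counter(votes).most_common(1)[0][0]

touches = [{"pressure_area": p} for p in (0.7, 0.6, 0.4, 0.8, 0.9, 0.3, 0.7)]
print(classify_user(touches))  # majority of the 7 per-touch votes
```

Aggregating over a window is what lifts the per-touch accuracy (86.5% in the paper) toward the 99% figure for 7+ consecutive touches.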
You can download the camera-ready version of the paper here. Radu will be presenting our work at INTERACT, which will be held in Bamberg, Germany, in September. I’ll post the talk when available!
My colleagues, Radu-Daniel Vatavu and Jacob O. Wobbrock, and I have had another paper accepted for publication! This paper continues our efforts to understand patterns and inconsistencies in how people make touchscreen gestures. This time, we introduced a way to use heatmap-style visualizations to examine articulation patterns in gesture datasets, and our paper “Gesture Heatmaps: Understanding Gesture Performance with Colorful Visualizations” was accepted to the ACM International Conference on Multimodal Interaction, to be held in Istanbul, Turkey, in November 2014. Here is the abstract:
We introduce gesture heatmaps, a novel gesture analysis technique that employs color maps to visualize the variation of local features along the gesture path. Beyond current gesture analysis practices that characterize gesture articulations with single-value descriptors, e.g., size, path length, or speed, gesture heatmaps are able to show with colorful visualizations how the value of any such descriptor varies along the gesture path. We evaluate gesture heatmaps on three public datasets comprising 15,840 gesture samples of 70 gesture types from 45 participants, on which we demonstrate heatmaps’ capabilities to (1) explain causes for recognition errors, (2) characterize users’ gesture articulation patterns under various conditions, e.g., finger versus pen gestures, and (3) help understand users’ subjective perceptions of gesture commands, such as why some gestures are perceived as easier to execute than others. We also introduce chromatic confusion matrices that employ gesture heatmaps to extend the expressiveness of standard confusion matrices to better understand gesture classification performance. We believe that gesture heatmaps will prove useful to researchers and practitioners doing gesture analysis, and consequently, they will inform the design of better gesture sets and development of more accurate recognizers.
Check out the camera-ready version of our paper here. Our paper will be presented as a poster at the conference, and I’ll post the PDF when available.
More work with my University of Maryland collaborators, including assistant professor Leah Findlater, has been accepted for publication! Look for our short paper “Understanding Child-Defined Gestures and Children’s Mental Models for Touchscreen Tabletop Interaction” to appear at the upcoming Interaction Design and Children (IDC) 2014 conference. We extended prior work by Jacob O. Wobbrock and colleagues in a paper from CHI 2009 on eliciting gesture interactions for touchscreen tabletops directly from users themselves; in our case, we asked children to define the gestures, and compared them to similar gestures designed by adults. Here is the abstract:
Creating a pre-defined set of touchscreen gestures that caters to all users and age groups is difficult. To inform the design of intuitive and easy-to-use gestures specifically for children, we adapted a user-defined gesture study by Wobbrock et al. that had been designed for adults. We then compared gestures created on an interactive tabletop by 12 children and 14 adults. Our study indicates that previous touchscreen experience strongly influences the gestures created by both groups; that adults and children create similar gestures; and that the adaptations we made allowed us to successfully elicit user-defined gestures from both children and adults. These findings will aid designers in better supporting touchscreen gestures for children, and provide a basis for further user-defined gesture studies with children.
You can see the camera-ready version of the paper here. The conference will be held in Aarhus, Denmark (the home country of LEGO!). Unfortunately, I won’t be attending, but first author (and graduating Master’s student) Karen Rust will present the paper at the conference. Look for her in the short paper madness session, and the poster session!
We are pleased to announce that a new paper on the MTAGIC project has been accepted to the International Journal of Child-Computer Interaction! The paper, entitled “Children (and Adults) Benefit From Visual Feedback during Gesture Interaction on Mobile Touchscreen Devices,” is an extension of our IDC 2013 paper on visual feedback and gestural interaction for children and adults. The journal version examines more features and additional recognizers to uncover the effects of the presence or absence of visual feedback during gestural interaction. Here is the abstract:
Surface gesture interaction styles used on mobile touchscreen devices often depend on the platform and application. Some applications show a visual trace of gesture input being made by the user, whereas others do not. Little work has been done examining the usability of visual feedback for surface gestures, especially for children. In this paper, we extend our previous work on an empirical study conducted with children, teens, and adults to explore characteristics of gesture interaction with and without visual feedback. We analyze 9 simple and 7 complex gesture features to determine whether differences exist between users of different age groups when completing surface gestures with and without visual feedback. We find that the gestures generated diverge significantly in ways that make them difficult to interpret by some recognizers. For example, users tend to make gestures with fewer strokes in the absence of visual feedback, and tend to make shorter, more compact gestures using straighter lines in the presence of visual feedback. In addition, users prefer to see visual feedback. Based on these findings, we present design recommendations for surface gesture interfaces for children, teens, and adults on mobile touchscreen devices. We recommend providing visual feedback, especially for children, wherever possible.
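As a toy illustration of the kind of gesture features involved, the sketch below computes a few simple ones (stroke count, total path length, bounding-box area) from lists of (x, y) points. These are illustrative stand-ins; the 9 simple and 7 complex features we analyzed are defined precisely in the paper.

```python
import math

def path_length(stroke):
    """Total Euclidean length of one stroke, given a list of (x, y) points."""
    return sum(math.hypot(x1 - x0, y1 - y0)
               for (x0, y0), (x1, y1) in zip(stroke, stroke[1:]))

def gesture_features(strokes):
    """A few simple features of a multistroke gesture, of the general
    kind analyzed in the paper (not its exact feature set)."""
    xs = [x for stroke in strokes for (x, _) in stroke]
    ys = [y for stroke in strokes for (_, y) in stroke]
    return {
        "stroke_count": len(strokes),
        "path_length": sum(path_length(s) for s in strokes),
        "bbox_area": (max(xs) - min(xs)) * (max(ys) - min(ys)),
    }

square = [[(0, 0), (0, 10), (10, 10), (10, 0), (0, 0)]]
print(gesture_features(square))
```

Features like these are what let us quantify findings such as “fewer strokes without feedback” and “shorter, more compact gestures with feedback.”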
When this article is officially published, I’ll add a link, but until then, you can check out the preprint version.
The $-family of gesture recognizers project has been working on more ways to characterize patterns in how users make gestures, in the form of new gesture accuracy measures (see our GI 2013 paper for the first set of measures we developed). This new set focuses on relative accuracy, or the degree to which two gestures match locally, rather than simply on global absolutes. Our paper introducing these measures has been accepted to ICMI 2013! My co-authors are Radu-Daniel Vatavu and Jacob O. Wobbrock. Here is the abstract:
Current measures of stroke gesture articulation lack descriptive power because they only capture absolute characteristics about the gesture as a whole, not fine-grained features that reveal subtleties about the gesture articulation path. We present a set of twelve new relative accuracy measures for stroke gesture articulation that characterize the geometric, kinematic, and articulation accuracy of single and multistroke gestures. To compute the accuracy measures, we introduce the concept of a gesture task axis. We evaluate our measures on five public datasets comprising 38,245 samples from 107 participants, about which we make new discoveries; e.g., gestures articulated at fast speed are shorter in path length than slow or medium-speed gestures, but their path lengths vary the most, a finding that helps understand recognition performance. This work will enable a better understanding of users’ stroke gesture articulation behavior, ultimately leading to better gesture set designs and more accurate recognizers.
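To make the idea of a relative measure concrete, here is a simplified sketch of one such comparison: the mean point-to-point distance between an articulated gesture and its task axis, assuming both paths are already resampled to the same number of points in a shared coordinate frame. This is an illustration of the general idea only; the twelve measures themselves are defined rigorously in the paper.

```python
import math

def shape_error(gesture, task_axis):
    """Mean point-to-point distance between a gesture and its task axis.
    Both are lists of (x, y) points, assumed pre-resampled to equal length
    and expressed in the same coordinate frame (a simplifying assumption)."""
    assert len(gesture) == len(task_axis), "paths must be resampled to equal length"
    dists = [math.hypot(gx - ax, gy - ay)
             for (gx, gy), (ax, ay) in zip(gesture, task_axis)]
    return sum(dists) / len(dists)

axis = [(0, 0), (1, 0), (2, 0), (3, 0)]
drawn = [(0, 1), (1, 1), (2, 1), (3, 1)]
print(shape_error(drawn, axis))  # every point is off by 1.0
```

Because the comparison is made point by point rather than on whole-gesture aggregates, a measure like this can localize where along the path an articulation deviates, which is what the abstract means by capturing fine-grained features.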
I’ll be at ICMI 2013 in Sydney, Australia, in December (summer down under!) to present the paper. Come ask me about the details! In the meantime, check out the camera-ready version of our paper here.
Over the past year on the MTAGIC project, we’ve been investigating differences in how children and adults make gestures and touch targets on mobile touchscreen devices. We have designed our study tasks to reflect the designs of existing apps on the market today, and have recently examined our data to understand the impact of visual feedback on gesture interaction for kids. Our paper on this topic, “Examining the Need for Visual Feedback during Gesture Interaction on Mobile Touchscreen Devices for Kids,” has been accepted to the Interaction Design and Children (IDC) conference! Read the abstract:
Surface gesture interaction styles used on modern mobile touchscreen devices are often dependent on the platform and application. Some applications show a visual trace of gesture input as it is made by the user, whereas others do not. Little work has been done examining the usability of visual feedback for surface gestures, especially for children. In this paper, we present results from an empirical study conducted with children, teens, and adults to explore characteristics of gesture interaction with and without visual feedback. We find that the gestures generated with and without visual feedback by users of different ages diverge significantly in ways that make them difficult to interpret. In addition, users prefer to see visual feedback. Based on these findings, we present several design recommendations for new surface gesture interfaces for children, teens, and adults on mobile touchscreen devices. In general, we recommend providing visual feedback, especially for children, wherever possible.
As usual, you can find the camera-ready version of this paper here. See you in New York City this June!
An upcoming special issue of the Springer Journal of Personal and Ubiquitous Computing (JPUC) on Educational Interfaces, Software, and Technology (EIST) will include an article on the MTAGIC project! This article, entitled “Designing Smarter Touch-Based Interfaces for Educational Contexts,” is an extension of our CHI 2012 EIST workshop paper. We report our foundational studies investigating children’s touch and gesture input patterns, and how they differ from adults, with some discussion of how these findings will impact the design and development of educational apps for touchscreen devices. Here is the abstract:
In next-generation classrooms and educational environments, interactive technologies such as surface computing, natural gesture interfaces, and mobile devices will enable new means of motivating and engaging students in active learning. Our foundational studies provide a corpus of over 10,000 touch interactions and nearly 7,000 gestures collected from nearly 70 adults and children ages 7 and up, which can help us understand the characteristics of children’s interactions in these modalities and how they differ from adults. Based on these data, we identify key design and implementation challenges of supporting children’s touch and gesture interactions, and we suggest ways to address them. For example, we find children have more trouble successfully acquiring onscreen targets and having their gestures recognized than do adults, especially the youngest age group (7 to 10 years old). The contributions of this work provide a foundation that enables touch-based interactive educational apps that increase student success.
I’ll add a post when this special issue is officially published. For now, if you’re interested, you can check out the camera-ready version.