Tag Archives: paper accepted

IDC 2013 paper accepted on feedback for gesture interaction with kids!

Over the past year on the MTAGIC project, we’ve been investigating differences in how children and adults make gestures and touch onscreen targets on mobile touchscreen devices. We designed our study tasks to reflect existing apps on the market today, and have recently examined our data to understand the impact of visual feedback on gesture interaction for kids. Our paper on this topic, “Examining the Need for Visual Feedback during Gesture Interaction on Mobile Touchscreen Devices for Kids,” has been accepted to the Interaction Design and Children (IDC) conference! Read the abstract:

Surface gesture interaction styles used on modern mobile touchscreen devices are often dependent on the platform and application. Some applications show a visual trace of gesture input as it is made by the user, whereas others do not. Little work has been done examining the usability of visual feedback for surface gestures, especially for children. In this paper, we present results from an empirical study conducted with children, teens, and adults to explore characteristics of gesture interaction with and without visual feedback. We find that the gestures generated with and without visual feedback by users of different ages diverge significantly in ways that make them difficult to interpret. In addition, users prefer to see visual feedback. Based on these findings, we present several design recommendations for new surface gesture interfaces for children, teens, and adults on mobile touchscreen devices. In general, we recommend providing visual feedback, especially for children, wherever possible.
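For readers who want a concrete picture of the two conditions, here is a minimal Python/tkinter sketch, a desktop stand-in for a touchscreen rather than our actual study software; the SHOW_FEEDBACK flag and the logging are purely illustrative:

```python
# Minimal sketch (not the study software): the gesture is logged either way;
# only whether an ink trace is rendered changes between the two conditions.
import tkinter as tk

SHOW_FEEDBACK = True  # flip to False to mimic the no-feedback condition

root = tk.Tk()
canvas = tk.Canvas(root, width=400, height=400, bg="white")
canvas.pack()

points = []  # captured gesture path, recorded in both conditions

def on_press(event):
    points.clear()
    points.append((event.x, event.y))

def on_drag(event):
    if not points:
        points.append((event.x, event.y))
        return
    x0, y0 = points[-1]
    points.append((event.x, event.y))
    if SHOW_FEEDBACK:
        # draw the visual trace segment-by-segment as the gesture is made
        canvas.create_line(x0, y0, event.x, event.y, width=2)

def on_release(event):
    print(f"captured {len(points)} points")
    canvas.delete("all")

canvas.bind("<ButtonPress-1>", on_press)
canvas.bind("<B1-Motion>", on_drag)
canvas.bind("<ButtonRelease-1>", on_release)
root.mainloop()
```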

As usual, you can find the camera-ready version of this paper here. See you in New York City this June!

Special Issue of JPUC to appear with article on MTAGIC project!

An upcoming special issue of the Springer Journal of Personal and Ubiquitous Computing (JPUC) on Educational Interfaces, Software, and Technology (EIST) will include an article on the MTAGIC project! This article, entitled “Designing Smarter Touch-Based Interfaces for Educational Contexts,” is an extension of our CHI 2012 EIST workshop paper. We report our foundational studies investigating children’s touch and gesture input patterns and how they differ from those of adults, with some discussion of how these findings will impact the design and development of educational apps for touchscreen devices. Here is the abstract:

In next-generation classrooms and educational environments, interactive technologies such as surface computing, natural gesture interfaces, and mobile devices will enable new means of motivating and engaging students in active learning. Our foundational studies provide a corpus of over 10,000 touch interactions and nearly 7,000 gestures, collected from nearly 70 adults and children ages 7 years old and up, that can help us understand the characteristics of children’s interactions in these modalities and how they differ from those of adults. Based on these data, we identify key design and implementation challenges of supporting children’s touch and gesture interactions, and we suggest ways to address them. For example, we find children have more trouble successfully acquiring onscreen targets and having their gestures recognized than do adults, especially the youngest age group (7 to 10 years old). The contributions of this work provide a foundation that enables touch-based interactive educational apps that increase student success.
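As one concrete illustration of the kind of analysis behind the target-acquisition finding, here is a small Python sketch of how success rates per age group might be tallied from touch logs. The log format, groups, and numbers are invented for illustration and are not drawn from our corpus:

```python
# Illustrative sketch (hypothetical log format, not the study's actual data):
# compute target-acquisition success rates per age group from touch logs.
from collections import defaultdict

# Each record: (age_group, touch_x, touch_y, target_x, target_y, target_radius)
touch_log = [
    ("7-10", 105, 98, 100, 100, 20),   # lands inside the target: a hit
    ("7-10", 160, 40, 100, 100, 20),   # lands well outside: a miss
    ("adult", 102, 101, 100, 100, 20),
]

def hit(tx, ty, cx, cy, r):
    """A touch succeeds if it lands within the target's radius."""
    return (tx - cx) ** 2 + (ty - cy) ** 2 <= r ** 2

totals = defaultdict(lambda: [0, 0])  # group -> [hits, attempts]
for group, tx, ty, cx, cy, r in touch_log:
    totals[group][0] += hit(tx, ty, cx, cy, r)
    totals[group][1] += 1

for group, (hits, attempts) in totals.items():
    print(f"{group}: {hits / attempts:.0%} of {attempts} touches hit the target")
```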

I’ll add a post when this special issue is officially published. For now, if you’re interested, you can check out the camera-ready version.

Paper on patterns in how people make surface gestures accepted to GI 2013!

The $-family of recognizers isn’t just about building better recognition algorithms; it’s also about understanding patterns and inconsistencies in how people make gestures. This kind of knowledge can inform gesture interaction both by guiding the development of better recognizers and by shaping the design of appropriate gesture sets. In this vein, my collaborators Jacob O. Wobbrock and Radu-Daniel Vatavu and I have had a paper accepted to the Graphics Interface 2013 conference on characterizing patterns in people’s execution of surface gestures across existing datasets. The paper is titled “Understanding the Consistency of Users’ Pen and Finger Stroke Gesture Articulation,” and here is the abstract:

Little work has been done on understanding the articulation patterns of users’ touch and surface gestures, despite the importance of such knowledge to inform the design of gesture recognizers and gesture sets for different applications. We report a methodology to analyze user consistency in gesture production, both between-users and within-user, by employing articulation features such as stroke type, stroke direction, and stroke ordering, and by measuring variations in execution with geometric and kinematic gesture descriptors. We report results on four gesture datasets (40,305 samples of 63 gesture types by 113 users). We find a high degree of consistency within-users (.91), lower consistency between-users (.55), higher consistency for certain gestures (e.g., less geometrically complex shapes are more consistent than complex ones), and a loglinear relationship between number of strokes and consistency. We highlight implications of our results to help designers create better surface gesture interfaces informed by user behavior.
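For readers new to the $-family mentioned above: at its core, a $1-style recognizer resamples each gesture to a fixed number of points, normalizes scale and position, and then compares it to stored templates by average point-to-point distance. Here is a minimal Python sketch of those preprocessing steps, simplified for illustration; the actual $1 recognizer also normalizes and searches over rotation, which is omitted here:

```python
# Minimal sketch of the core of a $1-style matcher: resampling,
# scale/translate normalization, and average point-to-point distance.
import math

N = 64  # number of points after resampling

def path_length(pts):
    return sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))

def resample(pts, n=N):
    """Redistribute the stroke into n evenly spaced points."""
    interval = path_length(pts) / (n - 1)
    pts = list(pts)
    out, acc, i = [pts[0]], 0.0, 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if acc + d >= interval:
            t = (interval - acc) / d
            qx = pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0])
            qy = pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1])
            out.append((qx, qy))
            pts.insert(i, (qx, qy))  # continue from the interpolated point
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:  # guard against floating-point shortfall
        out.append(pts[-1])
    return out

def normalize(pts, size=250.0):
    """Scale to a reference square and translate the centroid to the origin."""
    xs, ys = [p[0] for p in pts], [p[1] for p in pts]
    w, h = (max(xs) - min(xs)) or 1.0, (max(ys) - min(ys)) or 1.0
    pts = [(x * size / w, y * size / h) for x, y in pts]
    cx = sum(p[0] for p in pts) / len(pts)
    cy = sum(p[1] for p in pts) / len(pts)
    return [(x - cx, y - cy) for x, y in pts]

def distance(a, b):
    """Average point-to-point distance between two preprocessed gestures."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)
```

Classifying a candidate gesture then amounts to picking the stored template with the smallest distance after resampling and normalization, and the same kind of geometric comparison is a natural building block for measuring how much two articulations of the same gesture vary.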

As usual, you may download the camera-ready version of our paper if you are interested. See you in Regina!

Upcoming papers to appear at CHI 2013, GI 2013, and IDC 2013, plus a CHI 2013 Best Paper Award!

February was a great month over here with lots of good news coming in about conference and journal paper acceptances! The MTAGIC project will be well-represented at the upcoming CHI 2013 conference, with two workshop papers accepted on different aspects of the project (the workshops are RepliCHI and Mobile Accessibility). We’ve also heard great news that two papers about our work with kids and mobile touchscreen devices will appear at IDC 2013 and in an upcoming special issue of the Journal of Personal and Ubiquitous Computing!

In other news, my project with Jacob O. Wobbrock and Radu-Daniel Vatavu on using patterns in how people make surface gestures to inform the design of better gesture sets and gesture recognizers (e.g., the $-family of recognizers) will appear at GI 2013. And, last but not least, my side project with Leah Findlater on understanding how people with physical impairments, including children, use mainstream mobile touchscreen devices in their daily lives will receive a ‘Best Paper Award’ at CHI 2013! Only the top 1% of submissions receive this award, and we are very honored our work was selected to be among such great company.

Look for more details on each of these upcoming papers in blog posts throughout March and April; in the meantime, you can already see them listed in my current CV if you are interested.

CHI 2013 paper accepted on touchscreen use by people with physical impairments!

I’m pleased to announce that I have recently had a paper accepted to the upcoming CHI 2013 conference in Paris in May! This paper, entitled “Analyzing User-Generated YouTube Videos to Understand Touchscreen Use by People with Motor Impairments,” was written in collaboration with Leah Findlater, a professor of HCI at the University of Maryland (UMD), and Yoojin Kim, a Master’s student at UMD. We examined YouTube videos depicting people with physical impairments, including children, using touchscreen devices to understand the limitations and challenges these users encounter.

Here’s the abstract:

Most work on the usability of touchscreen interaction for people with motor impairments has focused on lab studies with relatively few participants and small cross-sections of the population. To develop a richer characterization of use, we turned to a previously untapped source of data: YouTube videos. We collected and analyzed 187 non-commercial videos uploaded to YouTube that depicted a person with a physical disability interacting with a mainstream mobile touchscreen device. We coded the videos along a range of dimensions to characterize the interaction, the challenges encountered, and the adaptations being adopted in daily use. To complement the video data, we also invited the video uploaders to complete a survey on their ongoing use of touchscreen technology. Our findings show that, while many people with motor impairments find these devices empowering, accessibility issues still exist. In addition to providing implications for more accessible touchscreen design, we reflect on the application of user-generated content to study user interface design.

Here is the camera-ready version of this paper. See you in Paris!

***Note: we have learned that this paper will receive a CHI ‘Best Paper Award’! Only the top 1% of submissions receive this award, and we are very honored our work was selected to be among such great company.
