I’m pleased to announce that the MTAGIC project has had a poster accepted to the Interaction Design and Children (IDC) 2013 conference coming up this month in New York City! This poster paper was led by UMBC Human-Centered Computing (HCC) PhD student Robin Brewer. While doing an independent study with me, Robin investigated ways of motivating young children (ages 5 to 7 years old) to complete activities during empirical studies. Her initial explorations showed that this age group found the tasks boring and tedious, even though older kids and adults had completed the same tasks without a problem. ‘Gamifying’ the tasks by adding a points-based reward structure along with physical prizes encouraged the kids to enthusiastically complete the activities. We recommend considering such gamification components for empirical studies with this age group. You can read the abstract below. For more details, see the paper. Come check out our poster if you’ll be at the conference!
In this paper, we describe the challenges we encountered and solutions we developed while collecting mobile touch and gesture interaction data in laboratory conditions from children ages 5 to 7 years old. We identify several challenges of conducting empirical studies with young children, including study length, motivation, and environment. We then propose and validate techniques for designing study protocols for this age group, focusing on the use of gamification components to better engage children in laboratory studies. The use of gamification increased our study task completion rates from 73% to 97%. This research contributes a better understanding of how to design study protocols for young children when lab studies are needed or preferred. Research with younger age groups alongside older children, adults, and special populations can lead to more sound guidelines for universal usability of mobile applications.
The $-family of recognizers isn’t just about building better recognition algorithms; it’s also about understanding patterns and inconsistencies in how people make gestures. This kind of knowledge will help inform gesture interaction, both in developing better recognizers and in designing appropriate gesture sets. In this vein, my collaborators Jacob O. Wobbrock and Radu-Daniel Vatavu and I have had a paper accepted to the Graphics Interface 2013 conference on characterizing patterns in people’s execution of surface gestures from existing datasets. The paper is titled “Understanding the Consistency of Users’ Pen and Finger Stroke Gesture Articulation,” and here is the abstract:
Little work has been done on understanding the articulation patterns of users’ touch and surface gestures, despite the importance of such knowledge to inform the design of gesture recognizers and gesture sets for different applications. We report a methodology to analyze user consistency in gesture production, both between-users and within-user, by employing articulation features such as stroke type, stroke direction, and stroke ordering, and by measuring variations in execution with geometric and kinematic gesture descriptors. We report results on four gesture datasets (40,305 samples of 63 gesture types by 113 users). We find a high degree of consistency within-users (.91), lower consistency between-users (.55), higher consistency for certain gestures (e.g., less geometrically complex shapes are more consistent than complex ones), and a loglinear relationship between number of strokes and consistency. We highlight implications of our results to help designers create better surface gesture interfaces informed by user behavior.
As usual, you may download the camera-ready version of our paper if you are interested. See you in Regina!
February was a great month over here with lots of good news coming in about conference and journal paper acceptances! The MTAGIC project will be well-represented at the upcoming CHI 2013 conference, with two workshop papers accepted on different aspects of the project (the workshops are RepliCHI and Mobile Accessibility). We’ve also heard great news that two papers about our work with kids and mobile touchscreen devices will appear at IDC 2013 and in an upcoming special issue of the Journal of Personal and Ubiquitous Computing!
In other news, my project with Jacob O. Wobbrock and Radu-Daniel Vatavu on using patterns in how people make surface gestures to inform the design of better gesture sets and gesture recognizers (e.g., the $-family of recognizers) will appear at GI 2013. And, last but not least, my side project with Leah Findlater on understanding how people with physical impairments, including children, are using mainstream mobile touchscreen devices in their daily lives will receive a ‘Best Paper Award’ at CHI 2013! This award is given to only the top 1% of submissions, and we are very honored our work was selected to be among such great company.
Look for more details on each of these upcoming papers in blog posts throughout March and April, and you can already see them listed in my current CV if you are interested.
I’m pleased to announce that I have recently had a paper accepted to the upcoming CHI 2013 conference in Paris in May! This paper, entitled “Analyzing User-Generated YouTube Videos to Understand Touchscreen Use by People with Motor Impairments,” was written in collaboration with Leah Findlater, a professor of HCI at the University of Maryland (UMD), and Yoojin Kim, a Master’s student at UMD. We examined YouTube videos depicting people with physical impairments, including children, using touchscreen devices in order to understand the limitations and challenges these users are encountering.
Here’s the abstract:
Most work on the usability of touchscreen interaction for people with motor impairments has focused on lab studies with relatively few participants and small cross-sections of the population. To develop a richer characterization of use, we turned to a previously untapped source of data: YouTube videos. We collected and analyzed 187 non-commercial videos uploaded to YouTube that depicted a person with a physical disability interacting with a mainstream mobile touchscreen device. We coded the videos along a range of dimensions to characterize the interaction, the challenges encountered, and the adaptations being adopted in daily use. To complement the video data, we also invited the video uploaders to complete a survey on their ongoing use of touchscreen technology. Our findings show that, while many people with motor impairments find these devices empowering, accessibility issues still exist. In addition to providing implications for more accessible touchscreen design, we reflect on the application of user-generated content to study user interface design.
Here is the camera-ready version of this paper. See you in Paris!
***Note: we have learned that this paper will receive a CHI ‘Best Paper Award’! This award is given to only the top 1% of submissions, and we are very honored our work was selected to be among such great company.
The MTAGIC project has a new paper recently accepted to the Interactive Tabletops and Surfaces 2012 conference, to be held in Cambridge, MA, in November, entitled “Interaction and Recognition Challenges in Interpreting Children’s Touch and Gesture Input on Mobile Devices.” It continues our work on investigating differences in how children and adults use mobile touchscreen devices; this paper focuses on technical challenges in interpreting kids’ touch and gesture input and how that may be more difficult than interpreting adults’ input.
Here is the abstract:
As mobile devices like the iPad and iPhone become increasingly commonplace, for many users touchscreen interactions are quickly overtaking other interaction methods in terms of frequency and experience. However, most of these devices have been designed for the general, typical user. Trends indicate that children are using these devices (either their parents’ or their own) for entertainment or learning activities. Previous work has found key differences in how children use touch and surface gesture interaction modalities compared to adults. In this paper, we specifically examine the impact of these differences in terms of automatically and reliably understanding what kids meant to do. We present a study of children and adults performing touch and surface gesture interaction tasks on mobile devices. We identify challenges related to (a) intentional and unintentional touches outside of onscreen targets and (b) recognition of drawn gestures, both of which indicate a need to design tailored interaction for children to accommodate and overcome these challenges.
Check out the camera-ready paper!
Co-authors Radu-Daniel Vatavu and Jacob O. Wobbrock and I have had a paper accepted to ICMI 2012, titled “Gestures as Point Clouds: A $P Recognizer for User Interface Prototypes,” in which we introduce $P, the latest member of the $-family of gesture recognizers. $P can handle multistroke and unistroke gestures alike with high accuracy, and remedies the main limitations of $N in terms of cost to store and match against all possible multistroke permutations.
Here is the abstract:
Rapid prototyping of gesture interaction for emerging touch platforms requires that developers have access to fast, simple, and accurate gesture recognition approaches. The $-family of recognizers ($1, $N) addresses this need, but the current most advanced of these, $N-Protractor, has significant memory and execution costs due to its combinatoric gesture representation approach. We present $P, a new member of the $-family, that remedies this limitation by considering gestures as clouds of points. $P performs similarly to $1 on unistrokes and is superior to $N on multistrokes. Specifically, $P delivers >99% accuracy in user-dependent testing with 5+ training samples per gesture type and stays above 99% for user-independent tests when using data from 10 participants. We provide a pseudocode listing of $P to assist developers in porting it to their specific platform and a “cheat sheet” to aid developers in selecting the best member of the $-family for their specific application needs.
You can find the camera-ready version of the paper here. Try out $P online in your browser here!
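For readers curious how the point-cloud idea works in practice, here is a simplified sketch in Python of the core steps: resample each gesture to a fixed number of points, normalize for scale and position, and compare the two clouds with a greedy nearest-point matching. This is my own illustrative rendition of the approach, not the paper’s pseudocode; the function names and normalization details are assumptions, and the published $P listing includes refinements omitted here.

```python
import math

N = 32  # number of points per resampled gesture (a common choice)

def _dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def resample(points, n=N):
    """Resample a stroke path to n roughly equidistant points.
    (Simplified: treats the input as one continuous path.)"""
    total = sum(_dist(points[i - 1], points[i]) for i in range(1, len(points)))
    interval = total / (n - 1)
    pts = list(points)
    out = [pts[0]]
    acc = 0.0
    i = 1
    while i < len(pts):
        d = _dist(pts[i - 1], pts[i])
        if d > 0 and acc + d >= interval:
            # interpolate a new point at the interval boundary
            t = (interval - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)  # continue measuring from the new point
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:           # guard against floating-point residue
        out.append(points[-1])
    return out[:n]

def normalize(points):
    """Scale to a unit box and translate the centroid to the origin."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    s = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    return [((x - cx) / s, (y - cy) / s) for x, y in points]

def cloud_distance(a, b, start):
    """Greedily match each point of cloud a (from index `start`) to its
    nearest unmatched point in cloud b; earlier matches weigh more.
    Assumes len(a) == len(b)."""
    n = len(a)
    matched = [False] * n
    total = 0.0
    i = start
    while True:
        best, best_j = float('inf'), -1
        for j in range(n):
            if not matched[j]:
                d = _dist(a[i], b[j])
                if d < best:
                    best, best_j = d, j
        matched[best_j] = True
        weight = 1 - ((i - start + n) % n) / n
        total += weight * best
        i = (i + 1) % n
        if i == start:
            break
    return total

def greedy_cloud_match(a, b):
    """Symmetric cloud distance: try several start indices in both
    directions and keep the minimum (smaller = more similar)."""
    n = len(a)
    step = max(1, int(n ** 0.5))
    best = float('inf')
    for start in range(0, n, step):
        best = min(best,
                   cloud_distance(a, b, start),
                   cloud_distance(b, a, start))
    return best
```

Because the clouds are unordered, stroke count, stroke order, and stroke direction all stop mattering, which is exactly what lets $P sidestep $N’s combinatoric blowup; a candidate gesture is simply classified as the template with the smallest cloud distance.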
A poster that my colleagues Sapna Prasad, Amy Hurst, Ravi Kuber, and I submitted to the ACM SIGACCESS Conference on Computers and Accessibility (ASSETS 2012) has been accepted! The poster, called “Participatory Design Workshop on Accessible Apps and Games with Students with Learning Disabilities,” reports on the participatory design workshop we ran in December with Landmark College students, which I previously posted about here and here. Here is the abstract:
This paper describes a Science-Technology-Engineering-Mathematics (STEM) outreach workshop conducted with post-secondary students diagnosed with learning differences including Learning Disabilities (LD), Attention Deficit / Hyperactivity Disorders (AD/HD), and/or Autism Spectrum Disorders (ASD). In this workshop, students were actively involved in participatory design exercises such as data gathering, identifying accessible design requirements, and evaluating mobile applications and games targeted for diverse users. This hands-on experience broadened students’ understanding of STEM areas, provided them with an opportunity to see themselves as computer scientists, and demonstrated how they might succeed in computing careers, especially in human-centered computing and interface design. Lessons learned from the workshop also offer useful insight on conducting participatory design with this unique population.
You can see more information on this project and the workshop at the project website. Check out the paper here.