Category Archives: Publication

Upcoming papers to appear at CHI 2013, GI 2013, and IDC 2013, plus a CHI 2013 Best Paper Award!

February was a great month over here, with lots of good news coming in about conference and journal paper acceptances! The MTAGIC project will be well represented at the upcoming CHI 2013 conference, with two workshop papers accepted on different aspects of the project (at the RepliCHI and Mobile Accessibility workshops). We’ve also heard great news that two papers about our work with kids and mobile touchscreen devices will appear at IDC 2013 and in an upcoming special issue of the journal Personal and Ubiquitous Computing!

In other news, my project with Jacob O. Wobbrock and Radu-Daniel Vatavu on using patterns in how people make surface gestures to inform the design of better gesture sets and gesture recognizers (e.g., the $-family of recognizers) will appear at GI 2013. And, last but not least, my side project with Leah Findlater on understanding how people with physical impairments, including children, are using mainstream mobile touchscreen devices in their daily lives will receive a ‘Best Paper Award’ at CHI 2013! This award is given to only the top 1% of submissions, and we are honored our work was selected to be among such great company.

Look for more details on each of these upcoming papers in blog posts throughout March and April, and you can already see them listed in my current CV if you are interested.

IUI 2013 workshop paper accepted on pen gesture features and cognitive load!

A paper from the Multimodal Stress Detection (MSD) project, titled “Further Investigating Pen Gesture Features Sensitive to Cognitive Load,” has recently been accepted to an IUI 2013 workshop on Interacting with Smart Objects (ISO). The paper continues the project’s first workshop paper on MSD (presented at the ICMI 2011 MMCogEmS workshop), which examined pen-input features (which we termed “gesture dynamics”) that could serve as reliable indicators of cognitive load. Along with collaborators at NICTA in Australia, I have been continuing to look at ways to use natural input behaviors to detect changes in cognitive state, specifically stress. The new paper, first-authored by NICTA student intern Ling Luo, identifies several new pen-gesture features that show detectable changes in the presence of cognitive stress. In the long term, this project may yield useful insights for adaptive accessible systems, and I plan to look at similar research questions for children using such systems. For more information, see the camera-ready version of the paper, or check out the abstract below:

A person’s cognitive state and capacity at a given moment strongly impact decision making and user experience, but are still very difficult to evaluate objectively, unobtrusively, and in real-time. Focusing on smart pen or stylus input, this paper explores features capable of detecting high cognitive load in a practical set-up. A user experiment was conducted in which participants were instructed to perform a vigilance-oriented, continuous attention, visual search task, controlled by handwriting single characters on an interactive tablet. Task difficulty was manipulated through the amount and pace of both target events and distractors being displayed. Statistical analysis results indicate that both the gesture length and width over height ratio decreased significantly during the high load periods of the task. Another feature, the symmetry of the letter ‘m’, shows that participants tend to oversize the second arch under higher mental loads. Such features can be computed very efficiently, so these early results are encouraging towards the possibility of building smart pens or styluses that will be able to assess cognitive load unobtrusively and in real-time.
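To make two of the features above concrete (gesture length and the width-over-height ratio), here is a minimal Python sketch of how they might be computed from a pen stroke captured as a list of (x, y) points. The function names and data layout are my own illustration for this post, not the implementation used in the paper.

```python
import math

def gesture_length(points):
    """Total path length of a pen stroke given as a list of (x, y) tuples."""
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

def width_over_height(points):
    """Ratio of the stroke's bounding-box width to its height."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    width, height = max(xs) - min(xs), max(ys) - min(ys)
    return width / height if height else float("inf")

# Toy example stroke; the paper reports both values tending to decrease
# during high-load periods of the task.
stroke = [(0, 0), (2, 5), (4, 1), (6, 6), (8, 0)]
print(gesture_length(stroke), width_over_height(stroke))
```

The “symmetry of the letter ‘m’” feature would additionally need a letter-specific segmentation of the stroke into its two arches, which I have left out of this sketch.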

CHI 2013 paper accepted on touchscreen use by people with physical impairments!

I’m pleased to announce that I have recently had a paper accepted to the upcoming CHI 2013 conference in Paris in May! This paper, entitled “Analyzing User-Generated YouTube Videos to Understand Touchscreen Use by People with Motor Impairments,” was written in collaboration with Leah Findlater, a professor of HCI at the University of Maryland (UMD), and Yoojin Kim, a Master’s student at UMD. We examined YouTube videos depicting people with physical impairments, including children, using touchscreen devices in order to understand the limitations and challenges these users are encountering.

Here’s the abstract:

Most work on the usability of touchscreen interaction for people with motor impairments has focused on lab studies with relatively few participants and small cross-sections of the population. To develop a richer characterization of use, we turned to a previously untapped source of data: YouTube videos. We collected and analyzed 187 non-commercial videos uploaded to YouTube that depicted a person with a physical disability interacting with a mainstream mobile touchscreen device. We coded the videos along a range of dimensions to characterize the interaction, the challenges encountered, and the adaptations being adopted in daily use. To complement the video data, we also invited the video uploaders to complete a survey on their ongoing use of touchscreen technology. Our findings show that, while many people with motor impairments find these devices empowering, accessibility issues still exist. In addition to providing implications for more accessible touchscreen design, we reflect on the application of user-generated content to study user interface design.

Here is the camera-ready version of this paper. See you in Paris!

***Note: we have learned that this paper will receive a CHI ‘Best Paper Award’! This award is given to only the top 1% of submissions, and we are honored our work was selected to be among such great company.

Interactive Tabletops & Surfaces 2012 paper accepted!

The MTAGIC project has a new paper, “Interaction and Recognition Challenges in Interpreting Children’s Touch and Gesture Input on Mobile Devices,” accepted to the Interactive Tabletops and Surfaces 2012 conference, to be held in Cambridge, MA, in November. It continues our work investigating differences in how children and adults use mobile touchscreen devices; this paper focuses on the technical challenges of interpreting kids’ touch and gesture input and on why that can be more difficult than interpreting adults’ input.

Here is the abstract:

As mobile devices like the iPad and iPhone become increasingly commonplace, for many users touchscreen interactions are quickly overtaking other interaction methods in terms of frequency and experience. However, most of these devices have been designed for the general, typical user. Trends indicate that children are using these devices (either their parents’ or their own) for entertainment or learning activities. Previous work has found key differences in how children use touch and surface gesture interaction modalities vs. adults. In this paper, we specifically examine the impact of these differences in terms of automatically and reliably understanding what kids meant to do. We present a study of children and adults performing touch and surface gesture interaction tasks on mobile devices. We identify challenges related to (a) intentional and unintentional touches outside of onscreen targets and (b) recognition of drawn gestures, both of which indicate a need to design tailored interaction for children to accommodate and overcome these challenges.

Check out the camera-ready paper!
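One of the two challenges above, touches that land just outside small onscreen targets, hints at the kind of accommodation a child-tailored interface might make. The sketch below is purely illustrative (my own, not the paper’s method): it hit-tests a touch against a target whose activation region has been enlarged by a tolerance, here a made-up 12-pixel value.

```python
from dataclasses import dataclass

@dataclass
class Target:
    x: float       # top-left corner, in pixels
    y: float
    width: float
    height: float

def hit_test(target: Target, touch_x: float, touch_y: float,
             tolerance: float = 0.0) -> bool:
    """Return True if the touch falls inside the target, optionally expanded
    by `tolerance` pixels on every side to accept less precise touches."""
    return (target.x - tolerance <= touch_x <= target.x + target.width + tolerance and
            target.y - tolerance <= touch_y <= target.y + target.height + tolerance)

# A touch 5 px below a 40x40 px button misses with no tolerance,
# but registers once a (hypothetical) child-sized slop of 12 px is allowed.
button = Target(x=100, y=100, width=40, height=40)
print(hit_test(button, 120, 145))                 # False
print(hit_test(button, 120, 145, tolerance=12))   # True
```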

Paper on new $P recognizer accepted to ICMI 2012!

Co-authors Radu-Daniel Vatavu and Jacob O. Wobbrock and I have had a paper accepted to ICMI 2012, titled “Gestures as Point Clouds: A $P Recognizer for User Interface Prototypes,” in which we introduce $P, the latest member of the $-family of gesture recognizers. $P handles multistroke and unistroke gestures alike with high accuracy, and remedies the main limitation of $N: the cost of storing and matching against all possible multistroke permutations.

Here is the abstract:

Rapid prototyping of gesture interaction for emerging touch platforms requires that developers have access to fast, simple, and accurate gesture recognition approaches. The $-family of recognizers ($1, $N) addresses this need, but the current most advanced of these, $N-Protractor, has significant memory and execution costs due to its combinatoric gesture representation approach. We present $P, a new member of the $-family, that remedies this limitation by considering gestures as clouds of points. $P performs similarly to $1 on unistrokes and is superior to $N on multistrokes. Specifically, $P delivers >99% accuracy in user-dependent testing with 5+ training samples per gesture type and stays above 99% for user-independent tests when using data from 10 participants. We provide a pseudocode listing of $P to assist developers in porting it to their specific platform and a “cheat sheet” to aid developers in selecting the best member of the $-family for their specific application needs.

You can find the camera-ready version of the paper here. Try out $P online in your browser here!
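For developers curious about the point-cloud idea before reading the paper, here is a compact Python sketch of the general approach: resample the gesture into a fixed number of points (ignoring stroke order), normalize scale and position, then greedily match points between the candidate and template clouds. This is my own simplified illustration, not the paper’s reference pseudocode, so consult the camera-ready listing (or the online demo) for the exact $P algorithm.

```python
import math

def resample(points, n=32):
    """Resample a gesture (strokes concatenated into one list of (x, y) points)
    into n roughly equidistant points along the drawn path."""
    interval = sum(math.dist(a, b) for a, b in zip(points, points[1:])) / (n - 1)
    pts = list(points)
    out, acc, i = [pts[0]], 0.0, 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if d > 0 and acc + d >= interval:
            t = (interval - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)   # q becomes the start of the next segment
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:        # guard against floating-point shortfall
        out.append(pts[-1])
    return out[:n]

def normalize(points):
    """Scale uniformly to a unit bounding box and move the centroid to the origin."""
    xs, ys = [p[0] for p in points], [p[1] for p in points]
    scale = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    return [((x - cx) / scale, (y - cy) / scale) for x, y in points]

def cloud_distance(a, b, start):
    """Greedily match each point of cloud a (beginning at index `start`) to its
    nearest unmatched point in cloud b; earlier matches receive larger weights."""
    n, used, total = len(a), [False] * len(b), 0.0
    for k in range(n):
        i = (start + k) % n
        best_j = min((j for j in range(n) if not used[j]),
                     key=lambda j: math.dist(a[i], b[j]))
        used[best_j] = True
        total += (1 - k / n) * math.dist(a[i], b[best_j])
    return total

def match_cost(candidate, template, n=32):
    """Lower is better; a recognizer picks the template with the smallest cost."""
    a = normalize(resample(candidate, n))
    b = normalize(resample(template, n))
    step = max(1, int(n ** 0.5))
    return min(min(cloud_distance(a, b, s), cloud_distance(b, a, s))
               for s in range(0, n, step))
```

A recognizer built on top of this would simply keep a list of named templates and return the name whose template gives the smallest match_cost for the candidate gesture.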
