Tag Archives: $N

Paper on patterns in how people make surface gestures accepted to GI 2013!

The $-family of recognizers isn’t just about building better recognition algorithms; it’s also about understanding patterns and inconsistencies in how people make gestures. This kind of knowledge helps inform gesture interaction both by guiding the development of better recognizers and by shaping the design of appropriate gesture sets. In this vein, my collaborators Jacob O. Wobbrock and Radu-Daniel Vatavu and I have had a paper accepted to the Graphics Interface 2013 conference on characterizing patterns in people’s execution of surface gestures from existing datasets. The paper is titled “Understanding the Consistency of Users’ Pen and Finger Stroke Gesture Articulation,” and here is the abstract:

Little work has been done on understanding the articulation patterns of users’ touch and surface gestures, despite the importance of such knowledge to inform the design of gesture recognizers and gesture sets for different applications. We report a methodology to analyze user consistency in gesture production, both between-users and within-user, by employing articulation features such as stroke type, stroke direction, and stroke ordering, and by measuring variations in execution with geometric and kinematic gesture descriptors. We report results on four gesture datasets (40,305 samples of 63 gesture types by 113 users). We find a high degree of consistency within-users (.91), lower consistency between-users (.55), higher consistency for certain gestures (e.g., less geometrically complex shapes are more consistent than complex ones), and a loglinear relationship between number of strokes and consistency. We highlight implications of our results to help designers create better surface gesture interfaces informed by user behavior.
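To make the notion of articulation features a little more concrete, below is a toy sketch of the kind of per-gesture descriptors the abstract refers to (stroke count and a coarse per-stroke direction). It is purely illustrative: the helper names are made up, and it does not reproduce the consistency measures actually used in the paper.

```python
# Toy illustration of simple articulation features (not the paper's actual
# measures): stroke count plus the dominant direction of each stroke.
# A gesture here is a list of strokes; each stroke is a list of (x, y) points.
import math

def stroke_direction(stroke):
    """Coarse direction of one stroke, from its first to its last point,
    bucketed into 8 compass-like sectors (0..7)."""
    (x0, y0), (x1, y1) = stroke[0], stroke[-1]
    angle = math.degrees(math.atan2(y1 - y0, x1 - x0)) % 360.0
    return int(angle // 45)

def articulation_signature(gesture):
    """A simple signature: number of strokes and the ordered direction of each."""
    return (len(gesture), tuple(stroke_direction(s) for s in gesture))

def same_articulation(g1, g2):
    """Two executions 'agree' (in this toy sense) if their signatures match."""
    return articulation_signature(g1) == articulation_signature(g2)
```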

As usual, you may download the camera-ready version of our paper if you are interested. See you in Regina!


Filed under Publication

New $N multistroke gesture set released!

I am pleased to announce that the Mixed Multistroke Gestures (MMG) dataset from our GI 2012 paper is now publicly available for download! It contains samples from 20 people who entered each of 16 gesture types 10 times at each of three speeds (slow, medium, fast), using either their finger or a stylus on a Tablet PC, for a total of 9600 samples. The samples are stored in the $N Recognizer’s data format, and each person’s samples are separated into user-speed sub-folders. See more details on the gestures, the users who entered them, and $N’s accuracy in recognizing them in our GI 2012 paper. You may download the dataset here. If you use it in your work, please cite us!
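If you want to poke at the dataset programmatically, here is a minimal loading sketch in Python. It assumes each sample is an XML file with a <Gesture> root whose <Stroke> children contain <Point X Y T> elements; the exact attribute names and nesting may differ, so check the files in the download against this sketch before relying on it.

```python
# Minimal loader sketch for $N-style gesture XML logs (not an official tool).
# Assumes each sample file has a <Gesture> root whose <Stroke> children hold
# <Point X="..." Y="..." T="..."/> elements -- verify against the actual MMG
# files, since attribute names and nesting may differ slightly.
import os
import xml.etree.ElementTree as ET

def load_sample(path):
    """Return (gesture_name, strokes), where strokes is a list of point lists."""
    root = ET.parse(path).getroot()
    name = root.get("Name", os.path.basename(path))
    strokes = []
    for stroke in root.findall("Stroke"):
        points = [(float(p.get("X")), float(p.get("Y")), float(p.get("T", "0")))
                  for p in stroke.findall("Point")]
        strokes.append(points)
    return name, strokes

def load_folder(folder):
    """Load every .xml sample in one user-speed sub-folder."""
    return [load_sample(os.path.join(folder, f))
            for f in sorted(os.listdir(folder)) if f.endswith(".xml")]
```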


Filed under Software / Data

GI 2012 talk posted!

Last week I presented our work on enhancing our $N multistroke recognizer by integrating Yang Li’s Protractor matching method at GI 2012. The presentation was more algorithm-focused than most of the HCI talks at the conference (and than most of my other work), but it still generated a few interesting questions. The slides for the talk are posted here. Stay tuned for future enhancements to the $-family of recognizers!


Filed under Talk / Presentation

Paper accepted to Graphics Interface 2012 as a Note!

My colleague Jacob O. Wobbrock and I have had a paper accepted to the Graphics Interface 2012 conference! We extend our $N multistroke recognizer to use the closed-form matching method of Yang Li’s Protractor, speeding up the matching process significantly. The paper is titled “$N-Protractor: A Fast and Accurate Multistroke Recognizer,” and here is the abstract:

Prior work introduced $N, a simple multistroke gesture recognizer based on template matching, intended to be easy to port to new platforms for rapid prototyping, and derived from the unistroke $1 recognizer. $N uses an iterative search method to find the optimal angular alignment between two gesture templates, like $1 before it. Since then, Protractor has been introduced, a unistroke pen and finger gesture recognition algorithm also based on template-matching and $1, but using a closed-form template-matching method instead of an iterative search method, considerably improving recognition speed over $1. This paper presents work to streamline $N with Protractor by using Protractor’s closed-form matching approach, and demonstrates that similar speed benefits occur for multistroke gestures from datasets from multiple domains. We find that the Protractor enhancements are over 91% faster than the original $N, and negligibly less accurate (<0.2%). We also discuss the impact that the number of templates, the input speed, and input method (e.g., pen vs. finger) have on recognition accuracy, and examine the most confusable gestures.

Check out the camera-ready version here. For the pseudocode of this method, see the $N project website.
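For a rough feel of the closed-form matching step that replaces $N’s iterative angular search, here is a small Python sketch of Protractor-style matching. It assumes both gestures have already been preprocessed the same way (resampled to the same number of points, translated to the centroid, flattened into an (x1, y1, x2, y2, …) vector, and normalized to unit length); it is an illustration of the idea, not the reference pseudocode on the project website.

```python
# Sketch of Protractor's closed-form optimal angular match (illustrative only;
# see the $N project website for the authors' reference pseudocode).
# Both gesture vectors are assumed to be the same length, centroid-translated,
# and normalized to unit length, with points flattened as (x1, y1, x2, y2, ...).
import math

def optimal_cosine_distance(v1, v2):
    """Angular distance between two preprocessed gesture vectors, minimized
    over rotation in closed form (no iterative search over angles)."""
    a = 0.0  # sum of dot-product terms
    b = 0.0  # sum of cross-product terms
    for i in range(0, len(v1), 2):
        a += v1[i] * v2[i] + v1[i + 1] * v2[i + 1]
        b += v1[i] * v2[i + 1] - v1[i + 1] * v2[i]
    theta = math.atan2(b, a)                      # best-aligning rotation
    similarity = a * math.cos(theta) + b * math.sin(theta)
    return math.acos(max(-1.0, min(1.0, similarity)))

def recognize(candidate, templates):
    """templates: iterable of (name, vector) pairs preprocessed like candidate.
    Returns the name of the closest template by angular distance."""
    best = min(templates, key=lambda t: optimal_cosine_distance(candidate, t[1]))
    return best[0]
```

Because the best rotation angle falls out of a single pass over the point vector, there is no iterative search over candidate angles, which is where the speed gains reported in the abstract come from.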


Filed under Publication