Tag Archives: $P

Paper on patterns in how people make surface gestures accepted to GI 2013!

The $-family of recognizers isn’t just about building better recognition algorithms; it’s also about understanding patterns and inconsistencies in how people make gestures. That knowledge can inform gesture interaction in two ways: by guiding the development of better recognizers and by helping designers choose appropriate gesture sets. In this vein, my collaborators Jacob O. Wobbrock and Radu-Daniel Vatavu and I have had a paper accepted to the Graphics Interface 2013 conference that characterizes patterns in people’s execution of surface gestures across existing datasets. The paper is titled “Understanding the Consistency of Users’ Pen and Finger Stroke Gesture Articulation,” and here is the abstract:

Little work has been done on understanding the articulation patterns of users’ touch and surface gestures, despite the importance of such knowledge to inform the design of gesture recognizers and gesture sets for different applications. We report a methodology to analyze user consistency in gesture production, both between-users and within-user, by employing articulation features such as stroke type, stroke direction, and stroke ordering, and by measuring variations in execution with geometric and kinematic gesture descriptors. We report results on four gesture datasets (40,305 samples of 63 gesture types by 113 users). We find a high degree of consistency within-users (.91), lower consistency between-users (.55), higher consistency for certain gestures (e.g., less geometrically complex shapes are more consistent than complex ones), and a loglinear relationship between number of strokes and consistency. We highlight implications of our results to help designers create better surface gesture interfaces informed by user behavior.
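
The consistency measures themselves are defined in the full paper; purely to make the phrase “geometric and kinematic gesture descriptors” concrete, here is a small C# sketch computing two common descriptors, path length and average speed, from a sequence of timestamped points. The Sample type and method names are illustrative assumptions, not the paper’s analysis code.

    // Illustrative only: two standard gesture descriptors (path length and
    // average speed) computed from timestamped points. This is not the
    // paper's analysis code; the Sample type is a hypothetical stand-in.
    using System;
    using System.Collections.Generic;

    public struct Sample { public double X, Y, T; }   // T in seconds

    public static class Descriptors
    {
        // Geometric descriptor: total length of the drawn path.
        public static double PathLength(IReadOnlyList<Sample> pts)
        {
            double len = 0;
            for (int i = 1; i < pts.Count; i++)
            {
                double dx = pts[i].X - pts[i - 1].X;
                double dy = pts[i].Y - pts[i - 1].Y;
                len += Math.Sqrt(dx * dx + dy * dy);
            }
            return len;
        }

        // Kinematic descriptor: average drawing speed over the gesture.
        public static double AverageSpeed(IReadOnlyList<Sample> pts)
        {
            double duration = pts[pts.Count - 1].T - pts[0].T;
            return duration > 0 ? PathLength(pts) / duration : 0;
        }
    }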

As usual, you may download the camera-ready version of our paper if you are interested. See you in Regina!


Filed under Publication

C# implementation of $P recognizer available, online demo in JavaScript!

We have recently made available a reference implementation of our $P recognizer in C#, which you can find on the $P project page. This version complements our original online demo and implementation in JavaScript, both of which remain available. When you download the C# .zip file, you receive (1) a DLL of just the recognizer, which you can use in your own C# applications, (2) a canvas drawing and recognizing demo in C# equivalent to the online JavaScript demo, and (3) a “How To” document explaining how to incorporate these pieces into your own projects. Try it out and let us know how it goes!
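
For a rough idea of how using the DLL looks, here is a minimal sketch. The type and method names (the PDollarGestureRecognizer namespace, Point, Gesture, and PointCloudRecognizer.Classify) are my recollection of the reference code and may differ from the shipped API, so treat the bundled “How To” document as the authoritative guide.

    // Minimal sketch of recognizing a gesture with the $P DLL; names are
    // assumptions based on the reference code, not a guaranteed API.
    using PDollarGestureRecognizer;

    class PDollarDemo
    {
        static void Main()
        {
            // One training example per gesture class; each Point carries a stroke id,
            // so multistroke gestures are simply longer point lists.
            Gesture[] trainingSet =
            {
                new Gesture(new[]
                {
                    new Point(0, 0, 1), new Point(1, 1, 1),   // first stroke of an "X"
                    new Point(1, 0, 2), new Point(0, 1, 2)    // second stroke of an "X"
                }, "X"),
                new Gesture(new[]
                {
                    new Point(0, 0, 1), new Point(1, 0, 1), new Point(1, 1, 1),
                    new Point(0, 1, 1), new Point(0, 0, 1)
                }, "rectangle")
            };

            // Points captured from the user, with strokes in a different order
            // and direction than the template.
            Gesture candidate = new Gesture(new[]
            {
                new Point(1, 0, 1), new Point(0, 1, 1),
                new Point(0, 0, 2), new Point(1, 1, 2)
            }, "");

            string match = PointCloudRecognizer.Classify(candidate, trainingSet);
            System.Console.WriteLine(match);  // should print "X"
        }
    }

If your points come from touch events, just tag each point with the index of the stroke it belongs to; $P itself does not care about stroke order or direction.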

A reminder: if you implement $P in a new language or in a new way, feel free to let us know and we will link to it from our page as well! Don’t forget to cite us!


Filed under Software / Data

ICMI talk on $P posted, and paper award!

Last week at the ICMI 2012 conference, I presented a new gesture recognizer in the $-family, called $P. $P is highly accurate even with few training examples or templates, handles gestures made with any number of strokes in any order or direction, and yet is built on simple concepts that keep it accessible to developers who are not experts in machine learning or pattern matching. My presentation slides are available here.
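
Those simple concepts amount to treating every gesture, no matter how many strokes it has, as an unordered cloud of points: resample it to a fixed number of points, scale it uniformly into a unit box, and translate its centroid to the origin, so that matching only has to compare shape. Here is a rough C# sketch of the scaling and translation steps, following the pseudocode in the paper rather than any shipped implementation; the Pt type is an illustrative assumption, and resampling is omitted for brevity.

    // Sketch of $P-style normalization (illustrative, not the reference code).
    // After resampling to a fixed number of points, the cloud is scaled
    // uniformly into a unit box and its centroid is moved to the origin,
    // making the match invariant to position and size.
    using System;
    using System.Collections.Generic;
    using System.Linq;

    public struct Pt { public double X, Y; public int StrokeId; }

    public static class Normalize
    {
        public static List<Pt> Scale(IReadOnlyList<Pt> pts)
        {
            double minX = pts.Min(p => p.X), maxX = pts.Max(p => p.X);
            double minY = pts.Min(p => p.Y), maxY = pts.Max(p => p.Y);
            double size = Math.Max(maxX - minX, maxY - minY);
            if (size == 0) size = 1;  // degenerate (single-point) gesture
            return pts.Select(p => new Pt
            {
                X = (p.X - minX) / size,
                Y = (p.Y - minY) / size,
                StrokeId = p.StrokeId
            }).ToList();
        }

        public static List<Pt> TranslateToOrigin(IReadOnlyList<Pt> pts)
        {
            double cx = pts.Average(p => p.X), cy = pts.Average(p => p.Y);
            return pts.Select(p => new Pt
            {
                X = p.X - cx,
                Y = p.Y - cy,
                StrokeId = p.StrokeId
            }).ToList();
        }
    }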

We also had some great news at the conference: my co-authors Radu-Daniel Vatavu and Jacob O. Wobbrock and I received an ‘Outstanding Paper Award’ for this paper!


Filed under Talk / Presentation

Paper on new $P recognizer accepted to ICMI 2012!

Co-authors Radu-Daniel Vatavu and Jacob O. Wobbrock and I have had a paper accepted to ICMI 2012, titled “Gestures as Point Clouds: A $P Recognizer for User Interface Prototypes,” in which we introduce $P, the latest member of the $-family of gesture recognizers. $P handles multistroke and unistroke gestures alike with high accuracy, and remedies $N’s main limitation: the cost of storing and matching against all possible multistroke permutations.

Here is the abstract:

Rapid prototyping of gesture interaction for emerging touch platforms requires that developers have access to fast, simple, and accurate gesture recognition approaches. The $-family of recognizers ($1, $N) addresses this need, but the current most advanced of these, $N-Protractor, has significant memory and execution costs due to its combinatoric gesture representation approach. We present $P, a new member of the $-family, that remedies this limitation by considering gestures as clouds of points. $P performs similarly to $1 on unistrokes and is superior to $N on multistrokes. Specifically, $P delivers >99% accuracy in user-dependent testing with 5+ training samples per gesture type and stays above 99% for user-independent tests when using data from 10 participants. We provide a pseudocode listing of $P to assist developers in porting it to their specific platform and a “cheat sheet” to aid developers in selecting the best member of the $-family for their specific application needs.
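
The matching step described in the paper is a greedy nearest-neighbor assignment between two equal-size point clouds, tried in both directions and from several starting indices, with earlier matches weighted more heavily. The sketch below follows the paper’s pseudocode and reuses the illustrative Pt type from the earlier snippet (repeated here for completeness); it is not the reference implementation.

    // Greedy cloud matching as described in the $P paper (a sketch, not the
    // reference code). Both clouds are assumed to have been resampled to the
    // same number of points and normalized as in the earlier snippet.
    using System;
    using System.Collections.Generic;

    public struct Pt { public double X, Y; public int StrokeId; }

    public static class CloudMatch
    {
        static double Dist(Pt a, Pt b)
        {
            double dx = a.X - b.X, dy = a.Y - b.Y;
            return Math.Sqrt(dx * dx + dy * dy);
        }

        // Match each point of cloud a (starting at index 'start') to its nearest
        // unmatched point in cloud b; earlier matches receive a larger weight.
        static double CloudDistance(IReadOnlyList<Pt> a, IReadOnlyList<Pt> b, int start)
        {
            int n = a.Count;                 // assumes a.Count == b.Count
            var matched = new bool[n];
            double sum = 0;
            int i = start;
            do
            {
                int best = -1;
                double bestDist = double.MaxValue;
                for (int j = 0; j < n; j++)
                {
                    if (matched[j]) continue;
                    double d = Dist(a[i], b[j]);
                    if (d < bestDist) { bestDist = d; best = j; }
                }
                matched[best] = true;
                double weight = 1.0 - ((i - start + n) % n) / (double)n;
                sum += weight * bestDist;
                i = (i + 1) % n;
            } while (i != start);
            return sum;
        }

        // Try both matching directions from several starting points and keep the
        // smallest distance; classification picks the template minimizing this.
        public static double GreedyCloudMatch(IReadOnlyList<Pt> a, IReadOnlyList<Pt> b)
        {
            int n = a.Count;
            int step = Math.Max(1, (int)Math.Floor(Math.Sqrt(n)));  // epsilon = 0.5
            double min = double.MaxValue;
            for (int start = 0; start < n; start += step)
            {
                min = Math.Min(min, CloudDistance(a, b, start));
                min = Math.Min(min, CloudDistance(b, a, start));
            }
            return min;
        }
    }

Classification then simply returns the template whose cloud distance to the candidate is smallest.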

You can find the camera-ready version of the paper here. Try out $P online in your browser here!


Filed under Publication