Tag Archives: $-family

Paper on visualizing touchscreen gestures with heatmaps accepted to ICMI 2014!

My colleagues, Radu-Daniel Vatavu and Jacob O. Wobbrock, and I have had another paper accepted for publication! This paper continues our efforts to understand patterns and inconsistencies in how people make touchscreen gestures. This time, we introduced a way to use heatmap-style visualizations to examine articulation patterns in gesture datasets, and our paper “Gesture Heatmaps: Understanding Gesture Performance with Colorful Visualizations” was accepted to the ACM International Conference on Multimodal Interaction, to be held in Istanbul, Turkey, in November 2014. Here is the abstract:

We introduce gesture heatmaps, a novel gesture analysis technique that employs color maps to visualize the variation of local features along the gesture path. Beyond current gesture analysis practices that characterize gesture articulations with single-value descriptors, e.g., size, path length, or speed, gesture heatmaps are able to show with colorful visualizations how the values of any such descriptor vary along the gesture path. We evaluate gesture heatmaps on three public datasets comprising 15,840 gesture samples of 70 gesture types from 45 participants, on which we demonstrate heatmaps’ capabilities to (1) explain causes for recognition errors, (2) characterize users’ gesture articulation patterns under various conditions, e.g., finger versus pen gestures, and (3) help understand users’ subjective perceptions of gesture commands, such as why some gestures are perceived easier to execute than others. We also introduce chromatic confusion matrices that employ gesture heatmaps to extend the expressiveness of standard confusion matrices to better understand gesture classification performance. We believe that gesture heatmaps will prove useful to researchers and practitioners doing gesture analysis, and consequently, they will inform the design of better gesture sets and development of more accurate recognizers.
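To give a feel for the idea, here is a minimal sketch of the kind of computation behind a heatmap: evaluate a local descriptor (speed, in this example) at every point of a stroke, then map each value to a color. The function names (`pointSpeeds`, `speedToColor`) and the simple blue-to-red ramp are illustrative assumptions, not code from our paper or toolkit.

```javascript
// A gesture point: {x, y, t} with t a timestamp in milliseconds.

// Compute a local descriptor (speed, px/ms) at each point of a stroke.
function pointSpeeds(points) {
  const speeds = [0]; // first point has no predecessor; assign 0
  for (let i = 1; i < points.length; i++) {
    const dx = points[i].x - points[i - 1].x;
    const dy = points[i].y - points[i - 1].y;
    const dt = Math.max(points[i].t - points[i - 1].t, 1); // avoid div by 0
    speeds.push(Math.hypot(dx, dy) / dt);
  }
  return speeds;
}

// Map a value in [min, max] to a blue (low) -> red (high) color.
function speedToColor(v, min, max) {
  const u = max > min ? (v - min) / (max - min) : 0; // normalize to [0, 1]
  const r = Math.round(255 * u);
  const b = Math.round(255 * (1 - u));
  return `rgb(${r},0,${b})`;
}

// Example: a short horizontal stroke that accelerates.
const stroke = [
  { x: 0, y: 0, t: 0 },
  { x: 5, y: 0, t: 100 },  // slow segment
  { x: 30, y: 0, t: 150 }, // fast segment
];
const speeds = pointSpeeds(stroke);
const colors = speeds.map(v =>
  speedToColor(v, Math.min(...speeds), Math.max(...speeds)));
```

Rendering the stroke with each segment drawn in its point's color then yields a heatmap-style view of where the gesture was articulated quickly or slowly.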

Check out the camera-ready version of our paper here. Our paper will be presented as a poster at the conference, and I’ll post the PDF when available.


Filed under Publication

ICMI 2013 talk posted, and toolkit released!

Last week at the ICMI 2013 conference in Sydney, Australia, I presented work done in collaboration with my co-authors Radu-Daniel Vatavu and Jacob O. Wobbrock on new ways of understanding how users make stroke gestures (for example, with stylus and finger on touchscreen devices), through the use of 12 “Relative Accuracy Measures for Stroke Gestures” that our paper introduced. The paper has details on the measures themselves and how they are derived; the talk focuses on what these types of measures can be used for and how they can help us design and build better gesture interactions. For those interested, my presentation slides are available here.

We have also released an open-source toolkit called “GREAT” (Gesture RElative Accuracy Toolkit) that you can use to compute the measures on your own dataset. Download it here!
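To illustrate the flavor of a relative accuracy measure, here is a minimal sketch in the spirit of the paper's shape error: resample the user's gesture and an idealized "gesture task axis" to n corresponding points, then look at the point-wise deviations along the path rather than a single global number. The names here (`resample`, `shapeDeviations`) are illustrative and are not the GREAT toolkit's actual API.

```javascript
// Total Euclidean length of a stroke's path.
function pathLength(pts) {
  let d = 0;
  for (let i = 1; i < pts.length; i++) {
    d += Math.hypot(pts[i].x - pts[i - 1].x, pts[i].y - pts[i - 1].y);
  }
  return d;
}

// Resample a stroke to n points equally spaced along its path.
function resample(pts, n) {
  const step = pathLength(pts) / (n - 1);
  const src = pts.map(p => ({ x: p.x, y: p.y }));
  const out = [{ ...src[0] }];
  let acc = 0;
  for (let i = 1; i < src.length; i++) {
    const d = Math.hypot(src[i].x - src[i - 1].x, src[i].y - src[i - 1].y);
    if (acc + d >= step && d > 0) {
      const t = (step - acc) / d;
      const q = {
        x: src[i - 1].x + t * (src[i].x - src[i - 1].x),
        y: src[i - 1].y + t * (src[i].y - src[i - 1].y),
      };
      out.push(q);
      src.splice(i, 0, q); // continue resampling from the new point
      acc = 0;
    } else {
      acc += d;
    }
  }
  while (out.length < n) out.push({ ...src[src.length - 1] });
  return out;
}

// Point-wise distances between a gesture and its task axis. Their mean
// is a single shape-error number, but the per-point values are what a
// relative measure (or a heatmap) can exploit.
function shapeDeviations(gesture, taskAxis, n = 16) {
  const a = resample(gesture, n);
  const b = resample(taskAxis, n);
  return a.map((p, i) => Math.hypot(p.x - b[i].x, p.y - b[i].y));
}
```

For instance, a stroke that hugs its task axis early but drifts away at the end produces small deviations at the start and large ones at the end, which a global measure like total path length would hide.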


Filed under Talk / Presentation

Paper on relative gesture accuracy measures accepted to ICMI 2013!

The $-family of gesture recognizers project has been working on more ways to characterize patterns in how users make gestures, in the form of new gesture accuracy measures (see our GI 2013 paper for the first set of measures we developed). This new set focuses on relative accuracy, or the degree to which two gestures match locally, rather than just in terms of global absolutes. Our paper introducing these measures has been accepted to ICMI 2013! My co-authors are Radu-Daniel Vatavu and Jacob O. Wobbrock. Here is the abstract:

Current measures of stroke gesture articulation lack descriptive power because they only capture absolute characteristics about the gesture as a whole, not fine-grained features that reveal subtleties about the gesture articulation path. We present a set of twelve new relative accuracy measures for stroke gesture articulation that characterize the geometric, kinematic, and articulation accuracy of single and multistroke gestures. To compute the accuracy measures, we introduce the concept of a gesture task axis. We evaluate our measures on five public datasets comprising 38,245 samples from 107 participants, about which we make new discoveries; e.g., gestures articulated at fast speed are shorter in path length than slow or medium-speed gestures, but their path lengths vary the most, a finding that helps understand recognition performance. This work will enable a better understanding of users’ stroke gesture articulation behavior, ultimately leading to better gesture set designs and more accurate recognizers.

I’ll be at ICMI 2013 in Sydney, Australia, in December (summer down under!) to present the paper. Come ask me about the details! In the meantime, check out the camera-ready version of our paper here.


Filed under Publication

Paper on patterns in how people make surface gestures accepted to GI 2013!

The $-family of recognizers isn’t just about building better recognition algorithms; it’s also about understanding patterns and inconsistencies in how people make gestures. This kind of knowledge will help inform gesture interaction both in terms of developing better recognizers and designing appropriate gesture sets. In this vein, I have had a paper accepted, along with my collaborators, Jacob O. Wobbrock and Radu-Daniel Vatavu, to the Graphics Interface 2013 conference, on characterizing patterns in people’s execution of surface gestures from existing datasets. The paper is titled “Understanding the Consistency of Users’ Pen and Finger Stroke Gesture Articulation,” and here is the abstract:

Little work has been done on understanding the articulation patterns of users’ touch and surface gestures, despite the importance of such knowledge to inform the design of gesture recognizers and gesture sets for different applications. We report a methodology to analyze user consistency in gesture production, both between-users and within-user, by employing articulation features such as stroke type, stroke direction, and stroke ordering, and by measuring variations in execution with geometric and kinematic gesture descriptors. We report results on four gesture datasets (40,305 samples of 63 gesture types by 113 users). We find a high degree of consistency within-users (.91), lower consistency between-users (.55), higher consistency for certain gestures (e.g., less geometrically complex shapes are more consistent than complex ones), and a loglinear relationship between number of strokes and consistency. We highlight implications of our results to help designers create better surface gesture interfaces informed by user behavior.

As usual, you may download the camera-ready version of our paper if you are interested. See you in Regina!


Filed under Publication

Upcoming papers to appear at CHI 2013, GI 2013, IDC 2013, and CHI 2013 best paper award!

February was a great month over here with lots of good news coming in about conference and journal paper acceptances! The MTAGIC project will be well-represented at the upcoming CHI 2013 conference, with two workshop papers accepted on different aspects of the project (the workshops are RepliCHI and Mobile Accessibility). We’ve also heard great news that two papers about our work with kids and mobile touchscreen devices will appear at IDC 2013 and in an upcoming special issue of the Journal of Personal and Ubiquitous Computing!

In other news, my project with Jacob O. Wobbrock and Radu-Daniel Vatavu on using patterns in how people make surface gestures to inform the design of better gesture sets and gesture recognizers (e.g., the $-family of recognizers) will appear at GI 2013. And, last but not least, my side project with Leah Findlater on understanding how people with physical impairments, including children, are using mainstream mobile touchscreen devices in their daily lives will receive a ‘Best Paper Award’ at CHI 2013! This award is an honor only the top 1% of submissions receive, and we are thrilled our work was selected to be among such great company.

Look for more details on each of these upcoming papers in blog posts throughout March and April, and you can already see them listed in my current CV if you are interested.


Filed under Publication

C# implementation of $P recognizer available, online demo in JavaScript!

We have recently made available a reference implementation of our $P recognizer in C#, which you can find on the $P project page. This version augments our original online demo and implementation in JavaScript, both of which remain available. When you download the C# .zip file, you receive (1) a DLL of just the recognizer, which you can use in your C# applications, (2) a canvas drawing and recognizing demo in C# equivalent to the online JavaScript demo, and (3) a “How To” document explaining how to incorporate these versions in your own projects. Try it out and let us know how it goes!
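For readers curious what makes $P tick, here is a deliberately simplified sketch of the greedy point-cloud matching at its heart, written in JavaScript to match the online demo's language. This is not the reference implementation: the real $P also resamples, scales, and translates gestures and tries multiple match starting points, and the names below (`cloudDistance`, `recognize`) are illustrative only.

```javascript
// Greedy one-direction cloud distance: match each point of cloud a to
// its nearest still-unmatched point of cloud b, weighting earlier
// matches more heavily. Both clouds are assumed to have n points.
function cloudDistance(a, b) {
  const n = a.length;
  const matched = new Array(n).fill(false);
  let sum = 0;
  for (let i = 0; i < n; i++) {
    let best = -1, bestDist = Infinity;
    for (let j = 0; j < n; j++) {
      if (matched[j]) continue;
      const d = Math.hypot(a[i].x - b[j].x, a[i].y - b[j].y);
      if (d < bestDist) { bestDist = d; best = j; }
    }
    matched[best] = true;
    const weight = 1 - i / n; // earlier matches count more
    sum += weight * bestDist;
  }
  return sum;
}

// Classify an unknown point cloud against templates by minimum distance.
function recognize(points, templates) {
  let bestName = null, bestDist = Infinity;
  for (const t of templates) {
    const d = cloudDistance(points, t.points);
    if (d < bestDist) { bestDist = d; bestName = t.name; }
  }
  return { name: bestName, distance: bestDist };
}
```

Because gestures are treated as unordered point clouds, this style of matching is indifferent to stroke count, stroke ordering, and stroke direction, which is exactly what makes $P suited to multistroke gestures.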

A reminder: if you implement $P in a new language or in a new way, feel free to let us know and we will link to it from our page as well! Don’t forget to cite us!


Filed under Software / Data

New $N multistroke gesture set released!

I am pleased to announce that the Mixed Multistroke Gestures (MMG) dataset from our GI 2012 paper is now publicly available for download! It contains samples from 20 people who entered each of 16 gesture types 10 times, using either their finger or a stylus on a Tablet PC, at three different speeds (slow, medium, fast), for a total of 9600 samples. The samples are stored in the $N Recognizer’s data format, and each person’s samples are separated into user-speed sub-folders. See more details on the gestures, the users who entered them, and $N’s accuracy in recognizing them in our GI 2012 paper. You may download the dataset here. If you use it in your work, please cite us!


Filed under Software / Data