My NSF-funded MTAGIC project will appear at two workshops at the upcoming CHI 2013 conference in Paris, France! The first is the RepliCHI workshop, which focuses on the role replication studies can play in the HCI literature. In our paper “Challenges of Replicating Empirical Studies with Children in HCI,” we present a series of empirical studies we have run with different age groups over the past 18 months, each essentially replicating the same methodology. We specifically describe how our methodology had to be adapted to work with very young children. This will be a two-day workshop. For more information, check out the camera-ready version of our paper; here is the abstract for a quick overview:
In this paper, we discuss the challenges of conducting a direct replication of a series of mobile device usability studies that were originally conducted with adults and older children (ages 7 to 17). The original studies were designed to investigate differences in how adults and children use mobile devices to touch targets and create surface gestures. In this paper, we report on a replication we conducted with young children (ages 5 to 7). We discuss several methodological changes that were needed to elicit the same quality of data from the replication with young children as had been obtained from the older children and adults. The insights we present are relevant to extending empirical HCI studies in general to younger children.
The second workshop is the Mobile Accessibility workshop, which focuses on improving the accessibility of mobile devices for users with different abilities and in different contexts. In our paper “Towards Designing Adaptive Touch-Based Interfaces,” we present our vision of how the work we’ve been doing on MTAGIC will lead to universally accessible mobile touchscreen interaction, highlighting some of the technical extensions we believe our work points to. Again, for more information, check out our camera-ready paper; here is the abstract:
As the use of mobile devices by non-typical users increases, so does the need for platforms that can support the unique ways in which these special users engage with them. We posit that, by developing an understanding of patterns in input behaviors for different user groups, we can design and develop interactions that support such non-typical users. We prove this technique with children: we present findings from two empirical studies showing how interaction patterns differ among younger children, older children, and adults. These findings point to a model of how to develop touch-based interactive technologies that can adapt to users of different ages or abilities. Such adaptations will serve to better support natural interactions by user populations with distinctive needs.
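To make the idea of adapting to a user group’s interaction patterns concrete, here is a purely illustrative sketch (not from the paper): a touch target could be enlarged based on a user’s recently observed miss distances, so that, say, a young child whose touches land farther from target centers automatically gets bigger targets. The function name, parameters, and padding factor are all hypothetical.

```python
def adapted_target_size(base_size, miss_distances, padding=1.25):
    """Hypothetical adaptation rule: enlarge a touch target based on a
    user's observed miss distances, one simple way an interface might
    adapt to the interaction patterns of different age groups.

    base_size: nominal target size in pixels.
    miss_distances: recent distances (px) between touch points and the
    intended target centers for this user.
    """
    if not miss_distances:
        return base_size  # no observations yet; keep the default size
    # Median miss distance as a robust estimate of typical touch error.
    typical_miss = sorted(miss_distances)[len(miss_distances) // 2]
    # Target must comfortably cover the typical error on both sides.
    return max(base_size, padding * 2 * typical_miss)
```

A real adaptive platform would of course need much more than this (per-gesture models, smoothing over time, layout constraints), but it shows the basic shape of “measure the group’s input behavior, then adjust the interface.”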
If you work in the area of kids and touch + gesture interaction, or mobile device interaction in general, find an MTAGIC project member at CHI and say hi!
A paper on the Multimodal Stress Detection (MSD) project titled “Further Investigating Pen Gesture Features Sensitive to Cognitive Load” has recently been accepted to an IUI 2013 workshop on Interacting with Smart Objects (ISO). The paper is a continuation of the project’s first workshop paper on MSD (presented at the ICMI 2011 MMCogEmS workshop), which examined pen-input features (which we termed “gesture dynamics”) that could serve as reliable detectors of cognitive load. Along with collaborators at NICTA in Australia, I have been continuing to look at ways to use natural input behaviors to detect changes in cognitive states, specifically stress. The new paper, first-authored by NICTA student intern Ling Luo, identifies several new pen-gesture features that show detectable changes in the presence of cognitive stress. In the long term, this project may yield useful insights for adaptive accessible systems, and I plan to look at similar research questions for children using such systems. For more information, see the camera-ready version of the paper, or check out the abstract below:
A person’s cognitive state and capacity at a given moment strongly impact decision making and user experience, but are still very difficult to evaluate objectively, unobtrusively, and in real-time. Focusing on smart pen or stylus input, this paper explores features capable of detecting high cognitive load in a practical set-up. A user experiment was conducted in which participants were instructed to perform a vigilance-oriented, continuous attention, visual search task, controlled by handwriting single characters on an interactive tablet. Task difficulty was manipulated through the amount and pace of both target events and distractors being displayed. Statistical analysis results indicate that both the gesture length and width over height ratio decreased significantly during the high load periods of the task. Another feature, the symmetry of the letter ‘m’, shows that participants tend to oversize the second arch under higher mental loads. Such features can be computed very efficiently, so these early results are encouraging towards the possibility of building smart pens or styluses that will be able to assess cognitive load unobtrusively and in real-time.
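To give a feel for how cheap these features are to compute, here is a minimal sketch (not the paper’s code; the point format is an assumption): gesture length and the width-over-height ratio can be derived directly from a gesture’s sequence of pen sample points.

```python
import math

def gesture_features(points):
    """Compute two simple pen-gesture features from one stroke.

    points: list of (x, y) pen samples in capture order.
    Returns (length, ratio) where length is the total path distance
    and ratio is the stroke bounding box's width over its height.
    """
    # Path length: sum of distances between consecutive samples.
    length = sum(
        math.hypot(x2 - x1, y2 - y1)
        for (x1, y1), (x2, y2) in zip(points, points[1:])
    )
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    width = max(xs) - min(xs)
    height = max(ys) - min(ys)
    ratio = width / height if height else float("inf")
    return length, ratio
```

Both features are linear passes over the sample points, which is why the abstract can claim they “can be computed very efficiently” on a smart pen or stylus.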
Last month, my colleague Quincy Brown and I presented our project on Mobile Touch and Gesture Interaction for Children (MTAGIC) at the CHI 2012 EIST workshop. We gave an overview of the work we’ve done so far and a little preview of what we plan to do next. The slides for the talk are posted here. See the EIST 2012 program for all the papers and talks.
My colleague Quincy Brown and I had a paper accepted to the CHI 2012 workshop on Educational Interfaces, Software and Technology (EIST), titled “Toward Comparing the Touchscreen Interaction Patterns of Kids and Adults.” Our paper presented an analysis of differences we have found between children and adults in the ways they use touch and gesture interaction, including challenges of touch target acquisition and gesture generation. The long-term goal of this work is to design and develop interactions that are more successful for children by taking these inherent differences into account. Here is the abstract:
Touchscreen interactions are increasingly commonplace with the mainstream adoption of devices like the iPad and iPhone. Kids are using their parents’ devices for entertainment, learning, and discovery, but the interactions have not always been designed with kids in mind. In this paper we discuss the results of our explorations of differences between children and adults on a dataset of touch- and gesture-based interactions. We find evidence for significant differences and discuss how these can be considered in design.
The camera-ready version of the paper is located here.
In a previous post, I mentioned that my colleagues at UMBC and Landmark College and I had an AccessComputing minigrant accepted for funding to run a “Participatory Design Workshop for Accessible Apps and Games” at Landmark, a small 2-year college in Vermont that serves students with learning and cognitive disabilities. In early December, we actually had the opportunity to run the workshop, and it was a great success! The purpose of the workshop was to expose Landmark students to some of the basic principles of human-computer interaction, focusing on participatory design and on taking user needs and characteristics into account when designing technology. UMBC students led the participatory design sessions on mobile apps and games they were designing as part of a current course project.
Throughout the workshop, we noted that the Landmark students were very engaged and greatly enjoyed the design activities. They were a technology-oriented group (recruited from a web design course) and heavily involved in playing video games. The apps we brought were gaming-oriented, if not full-fledged video games, and this aspect seemed to really appeal to the Landmark students. The UMBC students remarked on how much they, too, learned about working with users in small participatory design groups. We look forward to possibly running future versions of this workshop, or other collaborative activities with UMBC and Landmark students.
You can find the information on the ongoing collaboration between Landmark and UMBC, as well as specifics about the event and some of the outcomes here (stay tuned for updates!).
The ICMI 2011 MMCogEmS workshop was held today and our talk was very well-received! I am confident that some interesting collaboration opportunities will come out of participating in this workshop. The slides for the talk itself are posted here. See the ICMI 2011 MMCogEmS website for all the papers and talks.
We just had a paper accepted to the ICMI 2011 workshop on “Inferring Cognitive and Emotional States from Multimodal Measures (MMCogEmS)”! The paper is called “Gesture Dynamics: Features Sensitive to Task Difficulty and Correlated with Physiological Sensors” and reports partial results from our recent study in the Multimodal Stress Detection project. The study included many modalities of input, but this paper focuses on the gesture modality. Here is the abstract:
This paper presents preliminary results regarding which features of pen-based gesture input are sensitive to cognitive stress when manipulated via changes in task difficulty. We conducted a laboratory study in which participants performed a vigilance-oriented continuous attention and visual search task. Responses to the search stimuli were entered via pen gestures (e.g., drawing a letter corresponding to the stimulus). Task difficulty was increased during predefined intervals. Participants’ input behaviors were logged, allowing for analysis of gesture input patterns for features sensitive to changes in task difficulty. We also collected physiological sensor readings (e.g., skin temperature, pulse rate, and respiration rate). Input behavior features such as gesture size and pen pressure were not affected by task difficulty, but gesture duration and length were affected. Task difficulty also affected physiological sensors, notably pulse rate. Results indicate that both gesture dynamics and physiological sensors can be used to detect changes in difficulty-induced stress.
Here’s the camera-ready version of the paper.
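For intuition about the gesture-dynamics features the abstract mentions, here is a hypothetical sketch (not the study’s analysis code; the sample format is an assumption): gesture duration falls out of timestamped pen samples, and a first look at difficulty-induced change is simply the shift in a feature’s mean between low- and high-difficulty intervals.

```python
from statistics import mean

def gesture_duration(samples):
    """Duration of one gesture from timestamped pen samples.

    samples: list of (t, x, y) tuples in capture order, where t is a
    timestamp in seconds.
    """
    return samples[-1][0] - samples[0][0]

def mean_shift(low_values, high_values):
    """Difference in a feature's mean between high- and low-difficulty
    intervals; a positive value means the feature increased under the
    harder condition."""
    return mean(high_values) - mean(low_values)
```

The actual paper tests such differences statistically rather than just comparing means, but this is the basic shape of asking whether a feature like duration or length responds to task difficulty.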