Abstract

This project brings together three research areas in a combination that is now poised to advance the field of human-computer interaction. We combine new technology for measuring brain activity with functional near-infrared spectroscopy (fNIRS), the use of machine learning to analyze user data in human-computer interaction (HCI), and experience in designing, implementing, and evaluating non-command, adaptive user interfaces from our work on eye movement-based interaction.

In particular, fNIRS is still a research modality. It has rarely been used in combination with either machine learning or HCI, and, to our knowledge, the use of fNIRS as a real-time input to an adaptable interface breaks new ground. We believe this combination will also lead to new, more objective methods for evaluating next-generation interaction styles. We will bring our concept of reality-based interaction, described below, to this study. It focuses on the ways that new interaction styles exploit the user's pre-existing skills and expectations from the real world rather than trained computer skills. It also helps us differentiate mental effort devoted to interface-related or syntactic aspects of a task from that devoted to the underlying task or semantic aspects. Bringing these three fields together will open up a new area of HCI research. We hope to advance the theory and evaluation of interaction styles, as well as the development of new types of interactive interfaces, by using human brain activity as (1) a more objective measure for evaluating emerging interaction styles and (2) an input to adaptable user interfaces.

The expertise and collaboration needed to connect fNIRS, machine learning, and HCI are difficult to obtain in a single research team. We are well positioned at Tufts, with strong researchers in all three fields in the same or neighboring departments within the Engineering School. This proposal begins with background information and a review of literature relevant to our proposed research. Next, we present results from our feasibility study, demonstrating that our technical approach works and our goals appear viable. Finally, we describe the four phases of our proposed research and summarize the intellectual merit and broader impacts of the project.

The feasibility study results presented below show that, using our fNIRS measurements and machine learning algorithms, we were able to classify five different types of user workload in a computer task with average accuracy exceeding 95%.
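For concreteness, the sketch below shows the general shape of such a classification pipeline. The windowed features (per-channel mean and slope of the hemodynamic signal) and the SVM classifier are illustrative assumptions chosen for exposition, not our actual implementation, and the data are random placeholders.

```python
# Illustrative sketch of a workload-classification pipeline (assumed features
# and classifier; placeholder data stands in for real fNIRS recordings).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def extract_features(window: np.ndarray) -> np.ndarray:
    """Reduce one (samples x channels) fNIRS window to a feature vector:
    the per-channel mean and linear slope of the hemodynamic signal."""
    t = np.arange(window.shape[0])
    means = window.mean(axis=0)
    slopes = np.polyfit(t, window, deg=1)[0]  # one slope per channel
    return np.concatenate([means, slopes])

rng = np.random.default_rng(0)
windows = [rng.standard_normal((100, 16)) for _ in range(50)]  # placeholder data
labels = rng.integers(0, 5, size=50)                           # five workload types

X = np.stack([extract_features(w) for w in windows])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print(cross_val_score(clf, X, labels, cv=5).mean())  # mean classification accuracy
```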

In Phase 1 of our proposed work, we will refine our process and algorithms into a reliable and repeatable procedure for real-time user workload measurement. We will also extend our work to continuously varying workload levels and address the removal of motion artifacts from the data.
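As one example of the kind of preprocessing Phase 1 will address, the sketch below applies a zero-phase band-pass filter to suppress slow drift and high-frequency motion spikes. The cutoff frequencies, filter order, and sampling rate are illustrative assumptions, not the parameters we will ultimately adopt.

```python
# Hypothetical motion-artifact mitigation step: a zero-phase Butterworth
# band-pass filter along the time axis (cutoffs and sampling rate assumed).
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_fnirs(signal: np.ndarray, fs: float = 10.0,
                   low: float = 0.01, high: float = 0.5) -> np.ndarray:
    """Band-pass each channel of a (samples x channels) fNIRS signal."""
    b, a = butter(3, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, signal, axis=0)  # filtfilt avoids phase distortion

raw = np.random.default_rng(1).standard_normal((600, 16))  # 60 s x 16 channels
clean = bandpass_fnirs(raw)
```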

In Phase 2, we will combine EEG measurements with fNIRS, exploiting their complementary properties (EEG's fine temporal resolution and fNIRS's spatial localization) to obtain a fuller picture of the user's brain activity.
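One simple way to picture this combination is feature-level fusion: fast electrical features from EEG concatenated with slower hemodynamic features from fNIRS for each synchronized time window. The band choice and window sizes below are illustrative assumptions, not a committed design.

```python
# Hypothetical feature-level EEG/fNIRS fusion for one synchronized window
# (alpha-band power is an assumed EEG feature; window sizes are arbitrary).
import numpy as np

def eeg_band_power(eeg: np.ndarray, fs: float = 256.0) -> np.ndarray:
    """Mean alpha-band (8-12 Hz) power per EEG channel, via the FFT."""
    freqs = np.fft.rfftfreq(eeg.shape[0], d=1.0 / fs)
    power = np.abs(np.fft.rfft(eeg, axis=0)) ** 2
    alpha = (freqs >= 8.0) & (freqs <= 12.0)
    return power[alpha].mean(axis=0)

def fuse(eeg_window: np.ndarray, fnirs_window: np.ndarray) -> np.ndarray:
    """Concatenate EEG and fNIRS features into one vector for a classifier."""
    return np.concatenate([eeg_band_power(eeg_window), fnirs_window.mean(axis=0)])

fused = fuse(np.random.randn(512, 8), np.random.randn(20, 16))  # 2 s of each
```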

Phase 3 applies this technology as a tool for evaluating user interfaces. In particular, we will focus on next-generation, non-WIMP, reality-based user interfaces, which are less amenable to traditional evaluation techniques. The need for new techniques to evaluate these interfaces is a growing problem within the field of HCI [27]. The goals of new interfaces may also differ from those of traditional ones: a next-generation interface might aim for enhanced entertainment value, yet most evaluation tools measure performance as error rate or time spent on a task. Metrics such as user frustration, workload, and enjoyment have seen limited use in evaluation studies because they are usually gathered through subjective self-report. We will explore using fNIRS data as objective measurements of these properties and apply them to the evaluation of non-WIMP interfaces.
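As a schematic example of how such an objective measure might enter an evaluation study, per-trial decoded workload could serve as a dependent variable compared across interface conditions. The decoded scores below are placeholders, and the two-sample t-test is chosen purely for illustration.

```python
# Illustrative evaluation: compare decoded workload between two interface
# conditions (placeholder scores; test choice is for exposition only).
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(2)
workload_wimp = rng.normal(0.6, 0.1, size=20)  # decoded workload, WIMP UI
workload_rbi = rng.normal(0.5, 0.1, size=20)   # decoded workload, reality-based UI

t, p = ttest_ind(workload_wimp, workload_rbi)
print(f"t = {t:.2f}, p = {p:.3f}")  # lower mean suggests lighter workload
```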

Phase 4 focuses on creating new interactive, real-time user interfaces that can adapt their behavior based on the brain measurement information we obtain. The design challenge will be to use this information subtly and judiciously, as an additional, lightweight input that could make a mouse- or keyboard-driven interface more intuitive or efficient. We will draw on our prior work in designing eye movement-based interaction techniques, because of the strong parallels between the two HCI design problems.
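The sketch below illustrates one form such subtle adaptation might take: the brain signal never issues commands, but merely throttles interruptions. The class name, threshold, and deferral policy are hypothetical placeholders rather than a committed design.

```python
# Hypothetical adaptation policy: defer non-urgent notifications while the
# estimated workload is high, and flush them once it drops. All names and
# thresholds here are illustrative.
class AdaptiveNotifier:
    def __init__(self, high_threshold: float = 0.7):
        self.high_threshold = high_threshold
        self.pending: list[str] = []

    def on_workload(self, workload: float) -> None:
        """Called with each new workload estimate from the fNIRS classifier."""
        if workload < self.high_threshold:
            for message in self.pending:   # user has spare capacity again
                print("notify:", message)
            self.pending.clear()

    def notify(self, message: str, workload: float) -> None:
        if workload >= self.high_threshold:
            self.pending.append(message)   # defer rather than interrupt
        else:
            print("notify:", message)
```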

If successful, our work will pave the way for a new and useful combination of fNIRS, HCI, and machine learning. We will use it to produce better ways to study, characterize, and measure user interfaces, and we will classify them within the theoretical framework of reality-based interaction. We will then produce new types of interfaces that can adapt in real time to the user's workload profile or other brain activity, and we will develop technical improvements in applying fNIRS to realistic HCI settings and in combining it with other sensors.