I primarily conduct research in the field of Human-Computer Interaction, with a particular focus on cognitive state prediction from behavioral data. Adopting a holistic perspective, I investigate (1) methods to predict the cognitive state of a user, and (2) adaptation strategies that use the predicted state to dynamically adapt the user interface.
Adaptation strategies for multimodal user interfaces
Computing devices have pervaded almost every aspect of our lives, and while a particular communication channel may be useful in one situation, it can be impractical in another. For example, text input is inconvenient when the hands are needed for another task; conversely, most users feel uncomfortable sending and receiving audio messages in public spaces. Multimodal interfaces allow users to choose between multiple interaction modalities, but they typically deliver output in a default modality specified in the application settings.
In my research, I examine strategies and rules that specify how information should be presented given a user's situation and state. The objective is to improve the user experience by dynamically adapting either the presented content or its modality.
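As a minimal sketch of what such an adaptation rule could look like, the following Python snippet maps a (hypothetical) user context to an output modality. The context fields, thresholds, and rules are illustrative assumptions, not the actual strategies under study.

```python
from dataclasses import dataclass


# Hypothetical user context; the fields and their encoding are
# illustrative assumptions for this sketch.
@dataclass
class UserContext:
    hands_busy: bool       # e.g., the user is driving or cooking
    in_public: bool        # e.g., the user is on public transport
    cognitive_load: float  # predicted cognitive load in [0, 1]


def select_output_modality(ctx: UserContext) -> str:
    """Choose an output modality from simple, hand-written rules."""
    if ctx.in_public:
        # Audio output is often unwelcome around other people.
        return "text"
    if ctx.hands_busy or ctx.cognitive_load > 0.7:
        # Prefer audio when the hands or eyes are occupied,
        # or when the predicted load is high.
        return "audio"
    return "text"
```

For example, a user whose hands are busy in a private setting would receive audio output, while the same message would be rendered as text in a public space.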