The Handbook of Multimodal-Multisensor Interfaces provides the first authoritative resource on what has become the dominant paradigm for new computer interfaces: user input involving new media (speech, multi-touch, hand and body gestures, facial expressions, writing) embedded in multimodal-multisensor interfaces that often include biosignals. This edited collection is written by international experts and pioneers in the field. It provides a textbook, reference, and technology roadmap for professionals working in this and related areas. This second volume of the handbook begins with multimodal signal processing, architectures, and machine learning. It includes recent deep learning approaches for processing multisensorial and multimodal user data and interaction, as well as context-sensitivity. A further highlight is processing of information about users’ states and traits, an exciting emerging capability in next-generation user interfaces. These chapters discuss real-time multimodal analysis of emotion and social signals from various modalities, and perception of affective expression by users. Further chapters discuss multimodal processing of cognitive state using behavioral and physiological signals to detect cognitive load, domain expertise, deception, and depression. This collection of chapters provides walk-through examples of system design and processing, information on tools and practical resources for developing and evaluating new systems, and terminology and tutorial support for mastering this rapidly expanding field. In the final section of this volume, experts exchange views on the timely and controversial challenge topic of multimodal deep learning. The discussion focuses on how multimodal-multisensor interfaces are most likely to advance human performance during the next decade.
Table of Contents
1. Multimodal Machine Learning
2. Classifying Multimodal Data
3. Learning for Multimodal and Context-sensitive Interfaces
4. Deep Learning for Multisensorial and Multimodal Interaction
5. Multimodal User State and Trait Recognition
6. Multimodal-Multisensor Affect Detection
7. Multimodal Analysis of Social Signals
8. Real-time Sensing of Affect and Social Signals in a Multimodal Framework
9. How do Users Perceive Multimodal Expressions of Affects?
10. Multimodal Behavior and Physiological Signals as Indicators of Cognitive Load
11. Multimodal Learning Analytics
12. Multimodal Assessment of Depression and Related Disorders Based on Behavioral Signals
13. Multimodal Deception Detection
14. Perspectives on Strategic Fusion
15. Perspectives on Predictive Power of Multimodal Deep Learning
About the Author
Antonio Krüger (Saarland University and DFKI GmbH) is professor of Computer Science and Director of the Media Informatics Program at Saarland University, as well as Scientific Director of the Innovative Retail Laboratory at the German Research Center for Artificial Intelligence (DFKI). His research focuses on intelligent user interfaces, and on mobile and ubiquitous context-aware systems. He has served as General Chair of the Ubiquitous Computing Conference, and as Program Chair of Mobile HCI, IUI, and Pervasive Computing. He is also Steering Committee Chair of Intelligent User Interfaces (IUI), and an Associate Editor of the journals User Modeling and User-Adapted Interaction and ACM Interactive, Mobile, Wearable and Ubiquitous Technologies.