The Handbook of Multimodal-Multisensor Interfaces provides the first authoritative resource on what has become the dominant paradigm for new computer interfaces: user input involving new media (speech, multi-touch, hand and body gestures, facial expressions, writing) embedded in multimodal-multisensor interfaces.
This three-volume handbook is written by international experts and pioneers in the field. It provides a textbook, reference, and technology roadmap for professionals working in this and related areas.
This third volume focuses on state-of-the-art multimodal language and dialogue processing, including semantic integration of modalities. The development of increasingly expressive embodied agents and robots has become an active test bed for coordinating multimodal dialogue input and output, including processing of language and nonverbal communication. In addition, major application areas are featured for commercializing multimodal-multisensor systems, including automotive, robotic, manufacturing, machine translation, banking, and communications, among others. These systems rely heavily on software tools, data resources, and international standards to facilitate their development. For insights into the future, emerging multimodal-multisensor technology trends are highlighted in medicine, robotics, interaction with smart spaces, and similar areas. Finally, this volume discusses the societal impact of more widespread adoption of these systems, such as privacy risks and how to mitigate them. The handbook chapters provide a number of walk-through examples of system design and processing, information on practical resources for developing and evaluating new systems, and terminology and tutorial support for mastering this emerging field. In the final section of this volume, experts exchange views on a timely and controversial challenge topic: how they believe multimodal-multisensor interfaces need to be equipped to most effectively advance human performance during the next decade.
Table of Contents
- Preface
- Figure Credits
- Introduction: Toward the Design, Construction, and Deployment of Multimodal-Multisensor Interfaces
- MULTIMODAL LANGUAGE AND DIALOGUE PROCESSING
- Multimodal Integration for Interactive Conversational Systems
- Multimodal Conversational Interaction with Robots
- Situated Interaction
- Software Platforms and Toolkits for Building Multimodal Systems and Applications
- Challenge Discussion: Advancing Multimodal Dialogue
- Nonverbal Behavior in Multimodal Performances
- MULTIMODAL BEHAVIOR
- Ergonomics for the Design of Multimodal Interfaces
- Early Integration for Movement Modeling in Latent Spaces
- Standardized Representations and Markup Languages for Multimodal Interaction
- Multimodal Databases
- EMERGING TRENDS AND APPLICATIONS
- Medical and Health Systems
- Automotive Multimodal Human-Machine Interface
- Embedded Multimodal Interfaces in Robotics: Applications, Future Trends, and Societal Implications
- Multimodal Dialogue Processing for Machine Translation
- Commercialization of Multimodal Systems
- Privacy Concerns of Multimodal Sensor Systems
- Index
- Biographies
- Volume 3 Glossary
About the Author
Antonio Krüger (Saarland University and DFKI GmbH) is professor of Computer Science and Director of the Media Informatics Program at Saarland University, as well as Scientific Director of the Innovative Retail Laboratory at the German Research Center for Artificial Intelligence (DFKI). His research focuses on intelligent user interfaces, and mobile and ubiquitous context-aware systems. He has been General Chair of the Ubiquitous Computing Conference, and Program Chair of Mobile HCI, IUI, and Pervasive Computing. He is also the Steering Committee Chair of Intelligent User Interfaces (IUI), and an Associate Editor of the journals User Modeling and User-Adapted Interaction and ACM Interactive, Mobile, Wearable and Ubiquitous Technologies.