This handbook plays a fundamental role in the sustainable progress of speech research and development. With an accessible format and an accompanying DVD-ROM, it targets three categories of readers: graduate students, professors and active researchers in academia, and engineers in industry who need to understand or implement specific algorithms for their speech-related products. A superb source of application-oriented, authoritative, and comprehensive information about these technologies, this work combines the established knowledge derived from research in such fast-evolving disciplines as Signal Processing and Communications, Acoustics, Computer Science, and Linguistics.
Table of Contents
Foreword by J. L. Flanagan
Chap. 1 Introduction to Speech Processing
Part A: Production, Perception, and Modeling of Speech (M. M. Sondhi)
Part A describes contemporary views of the human phonatory and articulatory mechanisms to illustrate the physiological processes of speech production. It also covers nonlinear cochlear signal processing and masking in speech perception, the perception of speech and sound by humans, and various methods for speech quality assessment, with a focus on standardized methods.
Chap. 2 Physiological Processes of Speech Production
Chap. 3 Nonlinear Cochlear Signal Processing and Masking in Speech Perception
Chap. 4 Perception of Speech and Sound
Chap. 5 Speech Quality Estimation
Part B: Signal Processing for Speech (Y. Huang, J. Benesty)
Part B presents a large number of signal processing concepts and algorithms that are widely used in speech processing and its applications; a minimal linear-prediction example follows the chapter list below.
Chap. 6 Wiener and Adaptive Filters
Chap. 7 Linear Prediction
Chap. 8 Kalman Filter
Chap. 9 Homomorphic Systems and Cepstrum Analysis of Speech
Chap. 10 Pitch and Voicing Determination of Speech with an Extension Toward Music Signals
Chap. 11 Formant Estimation and Tracking
Chap. 12 The STFT, Sinusoidal Models, and Speech Modification
Chap. 13 Adaptive Blind Multichannel Identification
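For a concrete taste of the material in Part B, the following minimal sketch estimates linear prediction (LPC) coefficients for a single frame using the autocorrelation method and the Levinson-Durbin recursion, as treated in Chap. 7. The frame content, model order, and function names are illustrative assumptions, not code from the handbook.

import numpy as np

def lpc(frame, order):
    """Autocorrelation-method LPC: return a = [1, a1, ..., ap] and the residual
    energy, so that A(z) = 1 + a1 z^-1 + ... + ap z^-p is the prediction-error
    (inverse) filter."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        # Reflection coefficient for stage i of the Levinson-Durbin recursion
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err
        a[1:i] = a[1:i] + k * a[i - 1:0:-1]   # update lower-order coefficients
        a[i] = k
        err *= 1.0 - k * k                    # remaining prediction-error energy
    return a, err

if __name__ == "__main__":
    fs = 8000
    n = np.arange(240)                        # one 30 ms frame at 8 kHz
    frame = np.hamming(len(n)) * np.sin(2 * np.pi * 440 * n / fs)
    a, err = lpc(frame, order=10)
    print("LPC coefficients:", np.round(a, 3))
    print("residual energy:", err)

The all-pole filter 1/A(z) obtained this way underlies both formant analysis (Chap. 11) and the analysis-by-synthesis coders of Part C.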
Part C: Speech Coding (W. B. Kleijn)
Part C discusses the attributes of speech coders as well as the underlying principles that determine their behavior and architecture. Coders for both traditional and packet networks are covered, as well as low-bit-rate speech coding, various speech coding standards, and perceptual audio coders; a small companding example follows the chapter list below.
Chap. 14 Principles of Speech Coding
Chap. 15 Voice over IP: Speech Transmission over Packet Networks
Chap. 16 Low-Bit-Rate Speech Coding
Chap. 17 Analysis-by-Synthesis Speech Coding
Chap. 18 Perceptual Audio Coding of Speech Signals
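As one elementary illustration of the coding principles of Chap. 14, the sketch below implements mu-law companding, the logarithmic amplitude compression used in 64 kbit/s G.711 telephony. The 8-bit uniform quantizer applied after companding is an illustrative simplification, not the exact G.711 segment layout.

import numpy as np

MU = 255.0

def mulaw_encode(x, bits=8):
    """Compress x in [-1, 1] with the mu-law curve, then quantize uniformly."""
    y = np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)
    levels = 2 ** bits
    return np.round((y + 1) / 2 * (levels - 1)).astype(int)

def mulaw_decode(codes, bits=8):
    """Invert the uniform quantizer and expand back to the linear domain."""
    levels = 2 ** bits
    y = codes / (levels - 1) * 2 - 1
    return np.sign(y) * ((1 + MU) ** np.abs(y) - 1) / MU

if __name__ == "__main__":
    t = np.arange(0, 0.02, 1 / 8000)
    x = 0.5 * np.sin(2 * np.pi * 440 * t)
    x_hat = mulaw_decode(mulaw_encode(x))
    snr = 10 * np.log10(np.mean(x ** 2) / np.mean((x - x_hat) ** 2))
    print("mu-law 8-bit SNR: %.1f dB" % snr)

The logarithmic curve spends quantizer levels where speech amplitudes are most likely, which is the same perceptual economy that the more sophisticated coders of Chaps. 16-18 exploit in the spectral domain.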
Part D: Text-to-Speech Synthesis (S. Narayanan)
Part D presents different techniques for speech synthesis, including rule-based and corpus-based methods and combinations of the two. Linguistic analysis and prosodic processing, which are important parts of a text-to-speech (TTS) system, are reviewed. Other aspects of interest for TTS, such as voice transformation and the synthesis of expressive speech, are also discussed.
Chap. 19 Basic Principles of Speech Synthesis
Chap. 20 Rule-Based Speech Synthesis
Chap. 21 Corpus-Based Speech Synthesis
Chap. 22 Linguistic Processing for Speech Synthesis
Chap. 23 Prosodic Processing
Chap. 24 Voice Transformation
Chap. 25 Expressive/Affective Speech Synthesis
Part E: Speech Recognition (L. Rabiner, B.-H. Juang)
Part E describes the most important speech recognition technologies. The approach based on the powerful hidden Markov model is presented in depth, and other promising approaches are outlined. Robustness to the acoustic environment is also examined. Finally, several fundamental applications are discussed; a minimal HMM forward-recursion example follows the chapter list below.
Chap. 26 Historical Perspective of the Field of ASR/NLU
Chap. 27 HMMs and Related Speech Technologies
Chap. 28 Speech Recognition with Weighted Finite-State Transducers
Chap. 29 A Machine Learning Framework for Spoken-Dialog Classification
Chap. 30 Towards Superhuman Speech Recognition
Chap. 31 Natural Language Understanding
Chap. 32 Transcription and Distillation of Spontaneous Speech
Chap. 33 Environmental Robustness
Chap. 34 The Business of Speech Technologies
Chap. 35 Spoken Dialog Systems
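The hidden Markov models of Chap. 27 rest on a handful of recursions; the sketch below shows the scaled forward recursion that computes the log-likelihood of a discrete observation sequence. The two-state model, emission table, and observation sequence are made-up illustrative values, not material from the handbook.

import numpy as np

A = np.array([[0.9, 0.1],      # state-transition probabilities
              [0.2, 0.8]])
B = np.array([[0.7, 0.3],      # per-state emission probabilities
              [0.1, 0.9]])
pi = np.array([0.6, 0.4])      # initial state distribution

obs = [0, 1, 1, 0, 1]          # observed symbol indices

# Forward recursion with per-frame scaling to avoid numerical underflow
alpha = pi * B[:, obs[0]]
c = alpha.sum()
alpha /= c
log_likelihood = np.log(c)
for o in obs[1:]:
    alpha = (alpha @ A) * B[:, o]
    c = alpha.sum()
    alpha /= c
    log_likelihood += np.log(c)

print("log P(obs | model) =", log_likelihood)

In a real recognizer the discrete symbols are replaced by acoustic feature vectors with Gaussian-mixture or neural emission densities, but the recursion itself is unchanged.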
Part F: Speaker Recognition (S. Parthasarathy)
Part F covers the field of speaker recognition, addressing text-dependent and text-independent speaker recognition and their applications.
Chap. 36 Overview of Speaker Recognition
Chap. 37 Text-Dependent Speaker Recognition
Chap. 38 Text-Independent Speaker Recognition
Part G: Language Recognition (C.-H. Lee)
Part G provides an overview of the principles of state-of-the-art language recognition approaches. Language characterization, identification, and modeling are addressed. Vector-space characterization approaches that convert speech utterances into spoken-document vectors for modeling and classification are also presented.
Chap. 39 Principle of Spoken Language Recognition
Chap. 40 Spoken Language Characterization
Chap. 41 Automatic Language Recognition via Spectral and Token Based Approaches
Chap. 42 Vector Based Spoken Language Classification
Part H: Speech Enhancement (J. Chen, S. Gannot, J. Benesty)
Part H covers all the classical aspects of speech enhancement: noise reduction, dereverberation, echo cancellation, feedback control, and active noise control; a minimal spectral-subtraction example follows the chapter list below.
Chap. 43 Fundamentals of Noise Reduction
Chap. 44 Spectral Enhancement Methods
Chap. 45 Echo Cancellation
Chap. 46 Dereverberation
Chap. 47 Adaptive Beamforming and Postfiltering
Chap. 48 Feedback Control in Hearing Aids
Chap. 49 Active Noise Control
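As a concrete example of the single-channel methods surveyed in Chaps. 43 and 44, the sketch below performs magnitude spectral subtraction with overlap-add resynthesis. The test signal, the noise-only stretch used to estimate the noise spectrum, and the spectral-floor parameter are all illustrative assumptions.

import numpy as np

def spectral_subtraction(noisy, noise_mag, n_fft=512, hop=256, beta=0.01):
    """Subtract an estimated noise magnitude spectrum frame by frame."""
    window = np.hanning(n_fft)
    out = np.zeros(len(noisy))
    norm = np.zeros(len(noisy))
    for start in range(0, len(noisy) - n_fft, hop):
        frame = noisy[start:start + n_fft] * window
        spec = np.fft.rfft(frame)
        mag, phase = np.abs(spec), np.angle(spec)
        # Subtract the noise magnitude, keeping a small spectral floor
        clean_mag = np.maximum(mag - noise_mag, beta * mag)
        clean = np.fft.irfft(clean_mag * np.exp(1j * phase), n_fft)
        out[start:start + n_fft] += clean * window     # overlap-add
        norm[start:start + n_fft] += window ** 2
    return out / np.maximum(norm, 1e-8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fs = 8000
    t = np.arange(fs) / fs
    clean = np.sin(2 * np.pi * 300 * t)
    noise = 0.3 * rng.standard_normal(len(t))
    noisy = clean + noise
    # Noise spectrum estimated from a noise-only stretch (assumed known here)
    noise_mag = np.abs(np.fft.rfft(noise[:512] * np.hanning(512)))
    enhanced = spectral_subtraction(noisy, noise_mag)
    print("input SNR ~ %.1f dB" % (10 * np.log10(clean.var() / noise.var())))

The musical-noise artifacts of this simple rule are one motivation for the statistically derived spectral gains and the multichannel methods treated later in Parts H and I.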
Part I: Multichannel Speech Processing (J. Benesty, I. Cohen, Y. Huang)
Part I presents modern aspects of multichannel processing for acoustic scene analysis, speech acquisition, and sound presentation when large numbers of microphones and loudspeakers are available; a minimal time-delay-estimation example follows the chapter list below.
Chap. 50 Microphone Arrays
Chap. 51 Time Delay Estimation and Source Localization
Chap. 52 Convolutive Blind Source Separation Methods
Chap. 53 Sound Field Reproduction
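To give Part I a concrete anchor, the sketch below estimates the relative delay between two microphone signals with the GCC-PHAT cross-correlation, the classical building block behind the source-localization material of Chap. 51. The synthetic signals, delay, and noise level are illustrative assumptions.

import numpy as np

def gcc_phat(x, y, fs, max_tau=None):
    """Return the delay of y relative to x in seconds (positive if y lags x)."""
    n = 2 * max(len(x), len(y))              # zero-pad to avoid circular wrap
    X = np.fft.rfft(x, n)
    Y = np.fft.rfft(y, n)
    R = np.conj(X) * Y
    R /= np.maximum(np.abs(R), 1e-12)        # PHAT weighting: keep phase only
    cc = np.fft.irfft(R, n)
    max_shift = n // 2 if max_tau is None else int(max_tau * fs)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    fs = 16000
    s = rng.standard_normal(fs)              # 1 s of wideband test signal
    delay_samples = 23
    x = s + 0.05 * rng.standard_normal(len(s))
    y = np.roll(s, delay_samples) + 0.05 * rng.standard_normal(len(s))
    print("estimated delay: %.5f s (true %.5f s)"
          % (gcc_phat(x, y, fs), delay_samples / fs))

Pairwise delays of this kind feed the localization and beamforming methods of Chaps. 50 and 51.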
About the Authors
Subject Index
About the Authors
Jacob Benesty
Jacob Benesty received the master's degree in microwaves from Pierre & Marie Curie University, France, in 1987, and the Ph.D. degree in control and signal processing from Orsay University, France, in 1991. From January 1994 to July 1995, he worked at Telecom Paris University on multichannel adaptive filters and acoustic echo cancellation. From October 1995 to May 2003, he was first a Consultant and then a Member of the Technical Staff at Bell Laboratories, Murray Hill, NJ, USA. In May 2003, he joined the University of Quebec, INRS-EMT, in Montreal, Quebec, Canada, as a professor. His research interests are in signal processing, acoustic signal processing, and multimedia communications. Dr. Benesty received the 2001 Best Paper Award from the IEEE Signal Processing Society. He co-authored the books ‘Acoustic MIMO Signal Processing’ (Springer-Verlag, Berlin, 2006) and ‘Advances in Network and Acoustic Echo Cancellation’ (Springer-Verlag, Berlin, 2001). He is also a co-editor/co-author of four other books.
M. M. Sondhi
M. Mohan Sondhi is a consultant at Avaya Research Labs, Basking Ridge, New Jersey. Prior to joining Avaya, he spent 39 years at Bell Labs, from which he retired in 2001. He holds undergraduate degrees in Physics and Electrical Communication Engineering, and M.S. and Ph.D. degrees in Electrical Engineering. At Bell Labs he conducted research in speech signal processing, echo cancellation, acoustical inverse problems, speech recognition, articulatory models for analysis and synthesis of speech, and modeling of auditory and visual processing by humans.
He has authored or co-authored several book chapters, over 120 journal articles, 10 patents, and the book Advances in Network and Acoustic Echo Cancellation, and has co-edited the book Advances in Speech Signal Processing. He has been a Distinguished Lecturer of the IEEE ASSP Society, an Associate Editor of the IEEE Transactions on ASSP, and a visiting scientist at laboratories in Sweden, France, and Japan. He has served on the editorial board of the journal Speech Communication and has co-edited a special issue of the IEEE Transactions on Speech and Audio Processing. He is a Bell Labs Fellow, a co-recipient of a best paper award of the IEEE ASSP Society, and a co-recipient of IEEE's E. E. Sumner Award for 1998.
Y. Huang
Yiteng (Arden) Huang received the B.S. degree from Tsinghua University in 1994, and the M.S. and Ph.D. degrees from the Georgia Institute of Technology (Georgia Tech) in 1998 and 2001, respectively, all in electrical and computer engineering.
During his doctoral studies from 1998 to 2001, he was a research assistant with the Center for Signal and Image Processing, Georgia Tech, and a teaching assistant with the School of Electrical and Computer Engineering, Georgia Tech. In the summers of 1998 to 2000, he worked at Bell Laboratories, Murray Hill, NJ, on passive acoustic source localization with microphone arrays. Upon graduation, he joined Bell Laboratories as a Member of Technical Staff in March 2001. His current research interests are in acoustic signal processing and multimedia communications.
Dr. Huang is currently an Associate Editor of the EURASIP Journal on Applied Signal Processing. He is a member of the Signal Processing Theory and Methods and the Audio and Electroacoustics Technical Committees of the IEEE Signal Processing Society.