This volume contains the papers presented at the 15th International Symposium on Hearing (ISH), which was held at the Hotel Regio, Santa Marta de Tormes, Salamanca, Spain, between 1st and 5th June 2009. Since its inception in 1969, this Symposium has been a forum of excellence for debating the neurophysiological basis of auditory perception, with computational models as tools to test and unify physiological and perceptual theories. Every paper in this symposium combines at least two of the following: auditory physiology, psychophysics, or modeling. The topics range from cochlear physiology to auditory attention and learning. While the symposium is always hosted by European countries, participants come from all over the world and are among the leaders in their fields. The result is an outstanding symposium, which has been described by some as a “world summit of auditory research.”

The current volume has a bottom-up structure, from “simpler” physiological to more “complex” perceptual phenomena, and follows the order of presentations at the meeting. Parts I to III are dedicated to information processing in the peripheral auditory system and its implications for auditory masking, spectral processing, and coding. Part IV focuses on the physiological bases of pitch and timbre perception. Part V is dedicated to binaural hearing. Parts VI and VII cover recent advances in understanding speech processing and perception and auditory scene analysis. Part VIII focuses on the neurophysiological bases of novelty detection, attention, and learning.
Contents
Part I Cochlea/Peripheral Processing
1 Influence of Neural Synchrony on the Compound Action Potential,
Masking, and the Discrimination of Harmonic Complexes
2 A Nonlinear Auditory Filterbank Controlled by Sub-band Instantaneous
Frequency Estimates
3 Estimates of Tuning of Auditory Filter Using Simultaneous
and Forward Notched-noise Masking
4 A Model of Ventral Cochlear Nucleus Units Based on First Order
5 The Effect of Reverberation on the Temporal Representation
of the F0 of Frequency Swept Harmonic Complexes
in the Ventral Cochlear Nucleus
6 Spectral Edges as Optimal Stimuli for the Dorsal Cochlear Nucleus
7 Psychophysical and Physiological Assessment of the Representation
of High-frequency Spectral Notches in the Auditory Nerve
Part II Pitch
8 Spatio-Temporal Representation of the Pitch of Complex Tones
in the Auditory Nerve
9 Virtual Pitch in a Computational Physiological Model
10 Searching for a Pitch Centre in Human Auditory Cortex
11 Imaging Temporal Pitch Processing in the Auditory Pathway
Part III Modulation
12 Spatiotemporal Encoding of Vowels in Noise Studied with
the Responses of Individual Auditory-Nerve Fibers
13 Role of Peripheral Nonlinearities in Comodulation Masking Release
14 Neuromagnetic Representation of Comodulation Masking Release
in the Human Auditory Cortex
15 Psychophysically Driven Studies of Responses to Amplitude
Modulation in the Inferior Colliculus: Comparing Single-Unit
Physiology to Behavioral Performance
16 Source Segregation Based on Temporal Envelope Structure
and Binaural Cues
17 Simulation of Oscillating Neurons in the Cochlear Nucleus:
A Possible Role for Neural Nets, Onset Cells, and Synaptic
18 Forward Masking: Temporal Integration or Adaptation?
19 The Time Course of Listening
Part IV Animal Communication
20 Frogs Communicate with Ultrasound in Noisy Environments
21 The Olivocochlear System Takes Part in Audio-Vocal Interaction
22 Neural Representation of Frequency Resolution in the Mouse
Auditory Midbrain
23 Behavioral and Neural Identification of Birdsong under Several
Masking Conditions
Part V Intensity Representation
24 Near-Threshold Auditory Evoked Fields and Potentials Are in Line
with the Weber-Fechner Law
25 Brain Activation in Relation to Sound Intensity and Loudness
26 Duration Dependency of Spectral Loudness Summation, Measured
with Three Different Experimental Procedures
Part VI Scene Analysis
27 The Correlative Brain: A Stream Segregation Model
28 Primary Auditory Cortical Responses while Attending
to Different Streams
29 Hearing Out Repeating Elements in Randomly Varying Multitone
Sequences: A Case of Streaming?
30 The Dynamics of Auditory Streaming: Psychophysics, Neuroimaging,
and Modeling
31 Auditory Stream Segregation Based on Speaker Size, and Identification
of Size-Modulated Vowel Sequences
32 Auditory Scene Analysis: A Prerequisite for Loudness Perception
33 Modulation Detection Interference as Informational Masking
34 A Paradoxical Aspect of Auditory Change Detection
35 Human Auditory Cortical Processing of Transitions Between
‘Order’ and ‘Disorder’
36 Wideband Inhibition Modulates the Effect of Onset Asynchrony
as a Grouping Cue
37 Discriminability of Statistically Independent Gaussian Noise Tokens
and Random Tone-Burst Complexes
38 The Role of Rehearsal and Lateralization in Pitch Memory
Part VII Binaural Hearing
39 Interaural Correlation and Loudness
40 Interaural Phase and Level Fluctuations as the Basis of Interaural
Incoherence Detection
41 Logarithmic Scaling of Interaural Cross Correlation: A Model Based
on Evidence from Psychophysics and EEG
42 A Physiologically-Based Population Rate Code for Interaural Time
Differences (ITDs) Predicts Bandwidth-Dependent Lateralization
43 A π-Limit for Coding ITDs: Neural Responses and the Binaural Display
44 A π-Limit for Coding ITDs: Implications for Binaural Models
45 Strategies for Encoding ITD in the Chicken Nucleus Laminaris
46 Interaural Level Difference Discrimination Thresholds and Virtual
Acoustic Space Minimum Audible Angles for Single Neurons in the
Lateral Superior Olive
47 Responses in Inferior Colliculus to Dichotic Harmonic Stimuli:
The Binaural Integration of Pitch Cues
48 Level Dependent Shifts in Auditory Nerve Phase Locking Underlie
Changes in Interaural Time Sensitivity with Interaural Level
Differences in the Inferior Colliculus
49 Remote Masking and the Binaural Masking-Level Difference
50 Perceptual and Physiological Characteristics of Binaural
Sluggishness
51 Precedence-Effect with Cochlear Implant Simulation
52 Enhanced Processing of Interaural Temporal Disparities at
High Frequencies: Beyond Transposed Stimuli
53 Models of Neural Responses to Bilateral Electrical Stimulation
54 Neural and Behavioral Sensitivities to Azimuth Degrade with Distance
in Reverberant Environments
Part VIII Speech and Learning
55 Spectro-temporal Processing of Speech – An Information-Theoretic
Framework
56 Articulation Index and Shannon Mutual Information
57 Perceptual Compensation for Reverberation: Effects of
‘Noise-Like’ and ‘Tonal’ Contexts
58 Towards Predicting Consonant Confusions of Degraded Speech
59 The Influence of Masker Type on the Binaural Intelligibility
Level Difference
Index
About the Authors
Enrique A. Lopez-Poveda, Ph.D., is director of the Auditory Computation and Psychoacoustics Unit of the Neuroscience Institute of Castilla y León (University of Salamanca, Spain). His research focuses on understanding and modeling human cochlear nonlinear signal processing and the role of the peripheral auditory system in normal and impaired auditory perception. He has authored over 45 scientific papers and book chapters and is co-editor of the book Computational Models of the Auditory System (Springer Handbook of Auditory Research). He has been principal investigator, participant, and consultant on numerous research projects. He is a member of the Acoustical Society of America and of the Association for Research in Otolaryngology.
Alan R. Palmer, Ph.D., is Deputy Director of the MRC Institute of Hearing Research and holds a Special Professorship in Neuroscience at the University of Nottingham, UK. He received his first degree in Biological Sciences from the University of Birmingham, UK, and his Ph.D. in Communication and Neuroscience from the University of Keele, UK. After postdoctoral research at Keele, he established his own laboratory at the National Institute for Medical Research in London. This was followed by a Royal Society University Research Fellowship at the University of Sussex before he took a program leader position at the Medical Research Council Institute of Hearing Research in 1986. He heads a research team that uses neurophysiological, computational, and neuroanatomical techniques to study the way the brain processes sound.
Ray Meddis, Ph.D., is director of the Hearing Research Laboratory at the University of Essex, England. His research has concentrated on the development of computer models of the physiology of the auditory periphery and how these can be incorporated into models of psychophysical phenomena such as pitch and auditory scene analysis. He has published extensively in this area. He is co-editor of the book Computational Models of the Auditory System (Springer Handbook of Auditory Research). His current research concerns the application of computer models to an understanding of hearing impairment. He is a fellow of the Acoustical Society of America and a member of the Association for Research in Otolaryngology.