This unique contribution to the ongoing discussion of language acquisition considers the Argument from the Poverty of the Stimulus in the context of the wider debate over cognitive, computational, and linguistic issues.
* Critically examines the Argument from the Poverty of the Stimulus – the claim that the linguistic input children receive is insufficient for general learning mechanisms to explain the rich and rapid development of their knowledge of their first language(s)
* Focuses on formal learnability properties of the class of natural languages, considered from the perspective of several learning-theoretic models
* The only current book-length study of arguments for the poverty of the stimulus that focuses on the computational learning-theoretic aspects of the problem
Table of Contents
Preface.
1 Introduction: Nativism in Linguistic Theory.
1.1 Historical Development.
1.2 The Rationalist-Empiricist Debate.
1.3 Nativism and Cognitive Modularity.
1.4 Connectionism, Nonmodularity, and Antinativism.
1.5 Adaptation and the Evolution of Natural Language.
1.6 Summary and Conclusions.
2 Clarifying the Argument from the Poverty of the Stimulus.
2.1 Formulating the APS.
2.2 Empiricist Learning versus Nativist Learning.
2.3 Our Version of the APS.
2.4 A Theory-Internal APS.
2.5 Evidence for the APS: Auxiliary Inversion as a Paradigm Case.
2.6 Debate on the PLD.
2.7 Learning Theory and Indispensable Data.
2.8 A Second Empirical Case: Anaphoric One.
2.9 Summary and Conclusions.
3 The Stimulus: Determining the Nature of Primary Linguistic Data.
3.1 Primary Linguistic Data.
3.2 Negative Evidence.
3.3 Semantic, Contextual, and Extralinguistic Evidence.
3.4 Prosodic Information.
3.5 Summary and Conclusions.
4 Learning in the Limit: The Gold Paradigm.
4.1 Formal Models of Language Acquisition.
4.2 Mathematical Models of Learnability.
4.3 The Gold Paradigm of Learnability.
4.4 Critique of the Positive-Evidence-Only APS in IIL.
4.5 Proper Positive Results.
4.6 Variants of the Gold Model.
4.7 Implications of Gold’s Results for Linguistic Nativism.
4.8 Summary and Conclusions.
5 Probabilistic Learning Theory for Language Acquisition.
5.1 Chomsky’s View of Statistical Learning.
5.2 Basic Assumptions of Statistical Learning Theory.
5.3 Learning Distributions.
5.4 Probabilistic Versions of the IIL Framework.
5.5 PAC Learning.
5.6 Consequences of PAC Learnability.
5.7 Problems with the Standard Model.
5.8 Summary and Conclusions.
6 A Formal Model of Indirect Negative Evidence.
6.1 Introduction.
6.2 From Low Probability to Ungrammaticality.
6.3 Modeling the DDA.
6.4 Applying the Functional Lower Bound.
6.5 Summary and Conclusions.
7 Computational Complexity and Efficient Learning.
7.1 Basic Concepts of Complexity.
7.2 Efficient Learning.
7.3 Negative Results.
7.4 Interpreting Hardness Results.
7.5 Summary and Conclusions.
8 Positive Results in Efficient Learning.
8.1 Regular Languages.
8.2 Distributional Methods.
8.3 Distributional Learning of Context-Free Languages.
8.4 Lattice-Based Formalisms.
8.5 Arguments against Distributional Learning.
8.6 Summary and Conclusions.
9 Grammar Induction through Implemented Machine Learning.
9.1 Supervised Learning.
9.2 Unsupervised Learning.
9.3 Summary and Conclusions.
10 Parameters in Linguistic Theory and Probabilistic Language Models.
10.1 Learnability of Parametric Models of Syntax.
10.2 UG Parameters and Language Variation.
10.3 Parameters in Probabilistic Language Models.
10.4 Inferring Constraints on Hypothesis Spaces with Hierarchical Bayesian Models.
10.5 Summary and Conclusions.
11 A Brief Look at Some Biological and Psychological Evidence.
11.1 Developmental Arguments.
11.2 Genetic Factors: Inherited Language Disorders.
11.3 Experimental Learning of Artificial Languages.
11.4 Summary and Conclusions.
12 Conclusion.
12.1 Summary.
12.2 Conclusions.
References.
Author Index.
Subject Index.
About the Authors
Alexander Clark is a Lecturer in the Department of Computer Science at Royal Holloway, University of London. He is the co-editor, with Chris Fox and Shalom Lappin, of The Handbook of Computational Linguistics and Natural Language Processing (Wiley-Blackwell, 2010).
Shalom Lappin is Professor of Computational Linguistics at King’s College London. He is editor of The Handbook of Contemporary Semantic Theory (Wiley-Blackwell, 1996); co-author, with Chris Fox, of Foundations of Intensional Semantics (Wiley-Blackwell, 2005); and co-editor, with Alexander Clark and Chris Fox, of The Handbook of Computational Linguistics and Natural Language Processing (Wiley-Blackwell, 2010).