Markov Chain Monte Carlo (MCMC) methods are now an indispensable
tool in scientific computing. This book discusses recent
developments in MCMC methods, with an emphasis on those that make
use of past sample information during simulation. The application
examples are drawn from diverse fields such as bioinformatics,
machine learning, social science, combinatorial optimization, and
computational physics.
Key Features:
* Expanded coverage of the stochastic approximation Monte Carlo
and dynamic weighting algorithms, which are essentially immune to
the local-trap problem.
* A detailed discussion of the Monte Carlo Metropolis-Hastings
algorithm that can be used for sampling from distributions with
intractable normalizing constants.
* Up-to-date accounts of recent developments of the Gibbs
sampler.
* Comprehensive overviews of the population-based MCMC algorithms
and the MCMC algorithms with adaptive proposals.
This book can be used as a textbook or a reference book for a
one-semester graduate course in statistics, computational biology,
engineering, and computer science. Applied and theoretical
researchers alike will find this book beneficial.
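
For orientation, the short sketch below illustrates a generic random-walk
Metropolis-Hastings update, the building block behind many of the algorithms
covered in Chapters 3-8. It is a minimal illustration written for this
description, not code from the book; the function name metropolis_hastings,
the target log_density, and the proposal scale step are placeholder choices.

import numpy as np

def metropolis_hastings(log_density, x0, step=1.0, n_iter=10_000, rng=None):
    """Random-walk Metropolis-Hastings sampler (generic illustration).

    log_density : callable returning the log of an unnormalized target density
    x0          : starting point (1-D array)
    step        : standard deviation of the Gaussian proposal (placeholder)
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    logp = log_density(x)
    samples = np.empty((n_iter, x.size))
    for i in range(n_iter):
        # Symmetric Gaussian random-walk proposal, so the Hastings ratio
        # reduces to the ratio of target densities.
        y = x + step * rng.standard_normal(x.size)
        logp_y = log_density(y)
        # Accept with probability min(1, pi(y)/pi(x)).
        if np.log(rng.uniform()) < logp_y - logp:
            x, logp = y, logp_y
        samples[i] = x
    return samples

# Example usage: draw from a standard bivariate normal distribution.
draws = metropolis_hastings(lambda x: -0.5 * np.sum(x**2), x0=np.zeros(2))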
Contents
Preface
Acknowledgements
List of Figures
List of Tables
1 Bayesian Inference and Markov Chain Monte Carlo
1.1 Bayes
1.2 Bayes Output
1.3 Monte Carlo Integration
1.4 Random Variable Generation
1.5 Markov Chain Monte Carlo
Exercises
2 The Gibbs sampler
2.1 The Gibbs sampler
2.2 Data Augmentation
2.3 Implementation Strategies and Acceleration Methods
2.4 Applications
Exercises
3 The Metropolis-Hastings Algorithm
3.1 The Metropolis-Hastings Algorithm
3.2 Some Variants of the Metropolis-Hastings Algorithm
3.3 Reversible Jump MCMC Algorithm for Bayesian Model Selection Problems
3.4 Metropolis-within-Gibbs Sampler for ChIP-chip Data Analysis
Exercises
4 Auxiliary Variable MCMC Methods
4.1 Simulated Annealing
4.2 Simulated Tempering
4.3 Slice Sampler
4.4 The Swendsen-Wang Algorithm
4.5 The Wolff Algorithm
4.6 The Møller Algorithm
4.7 The Exchange Algorithm
4.8 Double MH Sampler
4.9 Monte Carlo MH Sampler
4.10 Applications
Exercises
5 Population-Based MCMC Methods
5.1 Adaptive Direction Sampling
5.2 Conjugate Gradient Monte Carlo
5.3 Sample Metropolis-Hastings Algorithm
5.4 Parallel Tempering
5.5 Evolutionary Monte Carlo
5.6 Sequential Parallel Tempering for Simulation of High Dimensional Systems
5.7 Equi-Energy Sampler
5.8 Applications
Exercises
6 Dynamic Weighting
6.1 Dynamic Weighting
6.2 Dynamically Weighted Importance Sampling
6.3 Monte Carlo Dynamically Weighted Importance Sampling
6.4 Sequentially Dynamically Weighted Importance Sampling
Exercises
7 Stochastic Approximation Monte Carlo
7.1 Multicanonical Monte Carlo
7.2 1/k-Ensemble Sampling
7.3 Wang-Landau Algorithm
7.4 Stochastic Approximation Monte Carlo
7.5 Applications of Stochastic Approximation Monte Carlo
7.6 Variants of Stochastic Approximation Monte Carlo
7.7 Theory of Stochastic Approximation Monte Carlo
7.8 Trajectory Averaging: Toward the Optimal Convergence Rate
Exercises
8 Markov Chain Monte Carlo with Adaptive Proposals
8.1 Stochastic Approximation-Based Adaptive Algorithms
8.2 Adaptive Independent Metropolis-Hastings Algorithms
8.3 Regeneration-Based Adaptive Algorithms
8.4 Population-Based Adaptive Algorithms
Exercises
References
Index
About the Authors
Faming Liang, Associate Professor, Department of Statistics, Texas A&M University.
Chuanhai Liu, Professor, Department of Statistics, Purdue University.
Raymond J. Carroll, Distinguished Professor, Department of Statistics, Texas A&M University.