The goal of this guide and manual is to provide a practical and brief overview of the theory of computerized adaptive testing (CAT) and multistage testing (MST) and to illustrate the methodologies and applications using the open source language R and several data examples. Implementation relies on the R packages catR and mstR, which have been developed or are still being developed by the first author (with his team) and which include some of the newest research algorithms on the topic.
The book covers many topics alongside the corresponding R code: the basics of R, a theoretical overview of CAT and MST, CAT designs, CAT assembly methodologies, CAT simulations, the catR package, CAT applications, MST designs, IRT-based MST methodologies, tree-based MST methodologies, the mstR package, and MST applications. CAT has been used in many large-scale assessments over recent decades, and MST has become very popular in recent years. The open source language R has also become one of the most useful tools for applications in almost all fields, including business and education.
Though very useful and popular, R is a difficult language to learn, with a steep learning curve. Given the clear need for CAT and MST and the complexity of their implementation, it is very difficult for users to simulate or implement CAT and MST on their own. Until this manual, there has been no book showing users how to design and use CAT and MST easily and without expense, i.e., by using the free R software. All examples and illustrations are generated using predefined scripts in the R language, available for free download from the book's website.
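For illustration only (this is not one of the book's predefined scripts), the following minimal sketch shows how a single fixed-length CAT might be simulated with the catR package; the simulated item bank, true ability value, estimation methods, and stopping rule are illustrative assumptions.

```r
# A minimal sketch, assuming catR is installed from CRAN.
library(catR)

set.seed(1)
bank <- genDichoMatrix(items = 200, model = "2PL")  # simulate a 2PL item bank

res <- randomCAT(trueTheta = 0.5, itemBank = bank,
                 start = list(nrItems = 1, theta = 0),             # first item selected near theta = 0
                 test  = list(method = "BM", itemSelect = "MFI"),  # Bayes modal interim estimates, maximum-information selection
                 stop  = list(rule = "length", thr = 20),          # fixed test length of 20 items
                 final = list(method = "ML"))                      # final ability estimate by maximum likelihood

res$thFinal  # final ability estimate
res$seFinal  # its standard error
```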
Table of Contents
Foreword.- Preface.- Ch 1 Overview of Adaptive Testing.- Ch 2 An Overview of Item Response Theory.- Part 1 Item-Level Computerized Adaptive Testing.- Ch 3 An Overview of Computerized Adaptive Testing.- Ch 4 Simulations of Computerized Adaptive Tests.- Ch 5 Examples of Simulations using catR.- Part 2 Computerized Multistage Testing.- Ch 6 An Overview of Computerized Multistage Testing.- Ch 7 Simulations of Computerized Multistage Tests.- Ch 8 Examples of Simulations using mstR.- Index.
About the authors
David Magis, PhD, is a Research Associate of the “Fonds de la Recherche Scientifique – FNRS” at the Department of Psychology, University of Liège, Belgium. He specializes in statistical methods for psychometrics, with particular interest in item response theory, differential item functioning, and computerized adaptive testing. His research interests include both theoretical and methodological development and open source implementation and dissemination in R. He is the main developer and maintainer of the packages catR and mstR, among others.
Duanli Yan, PhD, is Manager of Data Analysis and Computational Research for the Automated Scoring group in the Research and Development division at Educational Testing Service (ETS). She is also an Adjunct Professor at Rutgers University. At ETS, Dr. Yan’s responsibilities include the EXADEP™ test, the TOEIC® Institutional programs, and the upgrade and scoring of automated scoring engines. She has been a statistical coordinator and psychometrician for several operational programs and a development scientist for innovative research applications. Dr. Yan has received many awards, including the 2011 ETS Presidential Award, the 2013 NCME Brenda Loyd Award, the 2015 IACAT Early Career Award, and the 2016 AERA Significant Contribution to Educational Measurement and Research Methodology Award. She is a co-author of Bayesian Networks in Educational Assessment and a co-editor of Computerized Multistage Testing: Theory and Applications.
Alina A. von Davier, PhD, is Vice President at ACTNext and an Adjunct Professor at Fordham University. She was previously Senior Research Director of the Computational Psychometrics Research Center at Educational Testing Service (ETS), where she was responsible for developing a team of experts and a psychometric research agenda in support of next generation assessments. Computational psychometrics, which includes machine learning and data mining techniques, Bayesian inference methods, stochastic processes, and psychometric models, provides the main set of tools employed in her current work. She also works with psychometric models applied to educational testing: test score equating methods, item response theory models, and adaptive testing.