Seminar in Psychometrics

The seminar of the COMputational PSychometrics group takes place on Zoom on Tuesdays from 3:40 PM CET and is co-hosted by Patrícia Martinková and Jiří Lukavský. The talks are approximately 60 minutes long, followed by a discussion. The seminar is jointly held as course NMST571 at Charles University. If you want to participate and/or be added to the mailing list, please send an e-mail to martinkovaATcs.cas.cz.

Future sessions

May 11, 2021. Ed Merkle (University of Missouri): Recent progress on Bayesian structural equation models

Abstract. This talk covers research and developments surrounding the R package blavaan. Specific topics include strategies for speeding up model estimation, methods for computing model information criteria, and extensions to complex models. I will discuss the research in the context of open science and reproducibility, which has been a theme of the software's development, and I will provide demonstrations along the way to illustrate the functionality of blavaan.
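To give a flavor of the kind of workflow the talk may demonstrate, here is a minimal sketch (not the speaker's code) that fits a small Bayesian CFA with blavaan on the Holzinger–Swineford data shipped with lavaan; the model and MCMC settings are illustrative assumptions.

```r
# Minimal illustrative sketch: Bayesian CFA with blavaan
library(blavaan)
data("HolzingerSwineford1939", package = "lavaan")

model <- ' visual  =~ x1 + x2 + x3
           textual =~ x4 + x5 + x6
           speed   =~ x7 + x8 + x9 '

fit <- bcfa(model, data = HolzingerSwineford1939,
            n.chains = 3, burnin = 1000, sample = 1000)

summary(fit)      # posterior summaries of loadings and (co)variances
fitMeasures(fit)  # fit measures, including information criteria such as WAIC/LOOIC
```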



Past sessions

March 16, 2021. Patrícia Martinková (ICS CAS & Charles University): Computational aspects of reliability estimation
March 23, 2021. Hynek Cígler (Masaryk University): Reliability models in classical test and latent trait theories
March 30, 2021. František Bartoš (University of Amsterdam & ICS CAS): Robust Bayesian meta-analysis: A framework for addressing publication bias with model-averaging

Abstract. Publication bias poses a significant threat to meta-analyses, the gold standard of evidence. To alleviate the problem, a variety of publication bias adjustment methods have been suggested. However, it is nearly impossible to select the correct method when the data-generating process is unknown, which is usually the case, since no existing method performs well across a wide range of conditions. To address this issue, we developed the Robust Bayesian meta-analysis (RoBMA) framework. RoBMA allows us to combine different publication bias adjustment models in a coherent Bayesian way. Apart from the model-averaged estimates, RoBMA provides Bayes factor tests for the presence or absence of the meta-analytic effect, heterogeneity, and publication bias. In this talk, I provide a conceptual introduction to Bayesian model averaging in the context of meta-analysis, illustrate the RoBMA framework on an applied example, and demonstrate the performance of the method on real and simulated datasets.
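A minimal sketch of what a RoBMA analysis can look like in R: the interface follows the RoBMA package, but the effect sizes below are made up purely for illustration and are not from the talk.

```r
# Minimal illustrative sketch (hypothetical data): model-averaged
# meta-analysis with the RoBMA package.
library(RoBMA)

d  <- c(0.35, 0.20, 0.51, 0.12, 0.44)   # hypothetical standardized mean differences
se <- c(0.10, 0.12, 0.15, 0.09, 0.11)   # their standard errors

fit <- RoBMA(d = d, se = se, seed = 1)  # fits the default model ensemble

# Model-averaged estimates and Bayes factors for the presence of the
# effect, heterogeneity, and publication bias:
summary(fit)
```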

April 13, 2021. Michela Battauz (University of Udine): Item Response Theory Equating Methods for Multiple Forms

Abstract. Many testing programs use multiple forms of a test to deal with the security issues connected to test disclosure. However, since each form is composed of different items, the test scores are not directly comparable. To overcome this issue, the statistical procedure of equating can be applied. This talk focuses on Item Response Theory (IRT) equating methods for the common-item nonequivalent groups design. Under this design, the forms share a set of common items and are administered to different groups of examinees. The equating process consists of converting the item parameter estimates to a common scale using a linear transformation, and then determining comparable test scores. The coefficients of this linear function are called equating coefficients. Although many testing programs use several forms of a test, the equating methods proposed in the literature mainly consider only two test forms. In this talk, the equating methods for two test forms will be reviewed and some newer methods for equating multiple test forms will be presented. The methods will be illustrated using the R packages equateIRT and equateMultiple.
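To give a feel for the two-form workflow, here is a brief sketch with equateIRT, assuming the est2pl example estimates (2PL item parameters for several linked forms) bundled with the package; the steps follow the package's standard usage rather than the talk itself.

```r
# Minimal illustrative sketch of IRT equating with equateIRT, assuming
# the est2pl example dataset shipped with the package.
library(equateIRT)
data("est2pl", package = "equateIRT")

# Collect the item parameter estimates and their covariance matrices:
mods <- modIRT(coef = est2pl$coef, var = est2pl$var, display = FALSE)

# Direct equating coefficients between all pairs of forms sharing items:
direclist <- alldirec(mods = mods, method = "mean-mean")
summary(direclist)
```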

April 20, 2021. Marie Wiberg (Umeå University): How to evaluate different equating methods

Abstract. Test score equating is used to make scores from one scale comparable with scores from another scale. A large number of equating methods are available, depending on how the data are collected and what assumptions are made. The talk starts with a brief overview of available equating methods. Because so many equating methods have been developed for different situations and different tests, we need tools to evaluate and compare the resulting equating transformations. Many methods and measures have been proposed for evaluating an equating transformation; in general, they can be divided into two groups: equating-specific measures and statistical measures. In this talk I will discuss several of these methods and illustrate them with examples in R.
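As one illustration of a statistical (rather than equating-specific) measure, the base-R sketch below compares two hypothetical equating transformations via a weighted root mean squared difference; all numbers are made up for illustration and are not from the talk.

```r
# Minimal base-R sketch (illustrative): comparing two equating
# transformations of the same raw scores with a weighted RMSD.
scores <- 0:40                                # assumed raw score scale
e1 <- scores + 1.2                            # e.g., a mean equating
e2 <- 0.98 * scores + 1.5                     # e.g., a linear equating
w  <- dbinom(scores, size = 40, prob = 0.6)   # assumed score distribution

wrmsd <- sqrt(sum(w * (e1 - e2)^2) / sum(w))  # weighted RMS difference
wrmsd
```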

April 27, 2021. Gabriel Wallin (Université Côte d'Azur & Inria): Equating nonequivalent test groups using propensity scores

Abstract. For standardized assessment tests, scores from different test administrations are comparable only after the statistical process of equating. In this talk I will discuss equating of test scores when the test groups differ in their ability distributions. Equating procedures, which are constructed to adjust scores only for differences in the difficulty level of the test forms, thus risk also adjusting for the ability differences. The gold standard in this situation is to utilize a set of common items in the equating procedure. However, not all testing programs have common items available, and this presentation considers that setting. In the absence of common items, background information about the test-takers is gathered into a scalar function known as the propensity score, and the test forms are equated with respect to this score. The method will be demonstrated using both empirical and simulated data.
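The general idea can be sketched in a few lines of base R: estimate propensity scores from background covariates with logistic regression, then condition the equating on those scores (here, via stratification). The data, covariates, and stratification step below are hypothetical and only meant to convey the idea, not to reproduce the speaker's method.

```r
# Minimal illustrative sketch (hypothetical data): propensity scores
# from background covariates, followed by stratification.
set.seed(1)
n   <- 1000
dat <- data.frame(
  form  = rbinom(n, 1, 0.5),    # 1 = took form Y, 0 = took form X
  age   = rnorm(n, 25, 5),      # hypothetical background covariates
  grade = rnorm(n, 3, 1)
)

ps_mod <- glm(form ~ age + grade, data = dat, family = binomial)
dat$ps <- fitted(ps_mod)        # estimated propensity scores

# Stratify on the propensity score; equating would then proceed within
# (or conditional on) these strata instead of relying on common items.
dat$stratum <- cut(dat$ps, quantile(dat$ps, 0:5 / 5), include.lowest = TRUE)
table(dat$stratum, dat$form)
```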

May 4, 2021. Jiří Lukavský (Institute of Psychology CAS & Charles University): Bayesian psychometrics