Theoretical foundations of computational psychometrics

Theoretical foundations of computational psychometrics is a Czech Science Foundation grant awarded to Patrícia Martinková for the period 2021–2023. The project focuses on theoretical and computational aspects of psychometrics, with the aim of proposing estimation and detection methods that improve on traditional ones, as well as extensions to more complex designs. It also provides software implementations, together with simulated and real-data examples demonstrating the usefulness and advantages of the proposed methods.

Team members

Working with us

Openings for post-doc and student positions. Informal inquiries may be sent to

Manuscripts stemming from the project

  • Martinková, P. & Hladká, A. (2023). Computational Aspects of Psychometric Methods: With R. Chapman and Hall/CRC. doi:10.1201/9781003054313
  • Martinková, P., Bartoš, F., & Brabec, M. (2023). Assessing inter-rater reliability with heterogeneous variance components models: Flexible approach accounting for contextual variables. Journal of Educational and Behavioral Statistics. doi:10.3102/10769986221150517. arXiv:2207.02071
  • Hladká, A., Martinková, P., & Magis, D. (2023). Combining item purification and multiple comparison adjustment methods in detection of differential item functioning. Multivariate Behavioral Research. doi:10.1080/00273171.2023.2205393
  • Štěpánek, L., Dlouhá, J., & Martinková, P. (2023). Item difficulty prediction using item text features: Comparison of predictive performance across machine-learning algorithms. Mathematics, 11(19), 4104. doi:10.3390/math11194104
  • Hladká, A., Martinková, P., & Brabec, M. (2023). Parameter estimation in generalised logistic model with application to DIF detection. arXiv. doi:10.48550/arXiv.2302.12648
  • Bartoš, F. & Martinková, P. (2023). Assessing quality of selection procedures: Lower bound of false positive rate as a function of inter-rater reliability. arXiv. doi:10.48550/arXiv.2207.09101
  • Erosheva, E. A., Martinková, P., & Lee, C. J. (2021). When zero may not be zero: A cautionary note on the use of inter-rater reliability in evaluating grant peer review. Journal of the Royal Statistical Society: Series A. doi:10.1111/rssa.12681
  • Goldhaber, D., Grout, C., Wolff, M., & Martinková, P. (2021). Evidence on the dimensionality and reliability of professional references' ratings of teacher applicants. Economics of Education Review, 83, 102130. doi:10.1016/j.econedurev.2021.102130
  • Kolek, L., Šisler, V., Martinková, P., & Brom, C. (2021). Can video games change attitudes towards history? Results from a laboratory experiment measuring short- and long-term effects. Journal of Computer Assisted Learning. In press. doi:10.1111/jcal.12575


Selected conference presentations

  • Martinková, P., Bartoš, F., & Brabec, M. (2022). Computational aspects of reliability estimation, IMPS 2022 spotlight talk
  • Štěpánek, L., Dlouhá, J., & Martinková, P. (2022). Machine-learning methods for item difficulty prediction using item text features, IMPS 2022
  • Dlouhá, J., Štěpánek, L., & Martinková, P. (2022). Item difficulty prediction using computational psychometrics and linguistic algorithms, IMPS 2022
  • Netík, J., & Martinková, P. (2022). Revisiting parametrizations for the nominal response model, IMPS 2022
  • Martinková, P. (2021). Computational aspects of psychometrics taught with R and Shiny, useR!2021
  • Martinková, P., Bartoš, F., & Brabec, M. (2021). Inter-rater reliability in complex situations, IMPS 2021
  • Hladká, A., Martinková, P., & Brabec, M. (2021). Estimation in generalized logistic regression models for DIF detection, IMPS 2021
  • Martinková, P. (2021). Does a zero inter-rater reliability mean grant peer review is arbitrary? Metascience 2021

Previous work we are building on

  • Hladká, A., & Martinková, P. (2020). difNLR: Generalized logistic regression models for DIF and DDF detection. The R Journal, 12(1), 300–323. doi:10.32614/RJ-2020-014
  • Martinková, P., Hladká, A., & Potužníková, E. (2020). Is academic tracking related to gains in learning competence? Using propensity score matching and differential item change functioning analysis for better understanding of tracking implications. Learning and Instruction, 66, 101286. doi:10.1016/j.learninstruc.2019.101286
  • Bartoš, F., Martinková, P., & Brabec, M. (2020). Testing heterogeneity in inter-rater reliability. In M. Wiberg, D. Molenaar, J. González, U. Böckenholt, & J.-S. Kim (Eds.), Quantitative psychology (pp. 347–364). Cham: Springer International Publishing. doi:10.1007/978-3-030-43469-4_26
  • Štěpánek, L., & Martinková, P. (2020). Feasibility of computerized adaptive testing evaluated by Monte Carlo and post-hoc simulations. In Proceedings of the 2020 Federated Conference on Computer Science and Information Systems (FedCSIS), pp. 359–367. doi:10.15439/2020F197
  • Martinková, P., Goldhaber, D., & Erosheva, E. (2018). Disparities in ratings of internal and external applicants: A case for model-based inter-rater reliability. PLoS ONE, 13(10), e0203002. doi:10.1371/journal.pone.0203002
  • Martinková, P., & Drabinová, A. (2018). ShinyItemAnalysis for teaching psychometrics and to enforce routine analysis of educational tests. The R Journal, 10(2), 503–515. doi:10.32614/RJ-2018-074
  • Drabinová, A., & Martinková, P. (2017). Detection of differential item functioning with non-linear regression: Non-IRT approach accounting for guessing. Journal of Educational Measurement, 54(4), 498–517. doi:10.1111/jedm.12158
  • Martinková, P., Drabinová, A., Liaw, Y.-L., Sanders, E. A., McFarland, J. L., & Price, R. M. (2017). Checking equity: Why DIF analysis should be a routine part of developing conceptual assessments. CBE—Life Sciences Education, 16(2), rm2. doi:10.1187/cbe.16-10-0307