Theoretical foundations of computational psychometrics

Theoretical foundations of computational psychometrics is a Czech Science Foundation grant awarded to Patrícia Martinková for the period 2021–2023. The project focuses on theoretical and computational aspects of psychometrics, with the aim of proposing estimation and detection methods that improve on traditional ones, as well as extensions to more complex designs. The project also provides software implementations, together with simulated and real-data examples demonstrating the usefulness and advantages of the proposed methods.

Team members

Working with us

We have openings for post-doc and student positions. Informal inquiries may be sent to

Manuscripts stemming from the project

  • Erosheva, E. A., Martinková, P., & Lee, C. J. (2021). When zero may not be zero: A cautionary note on the use of inter-rater reliability in evaluating grant peer review. Journal of the Royal Statistical Society: Series A. doi:10.1111/rssa.12681
  • Goldhaber, D., Grout, C., Wolff, M., & Martinková, P. (2021). Evidence on the dimensionality and reliability of professional references' ratings of teacher applicants. Economics of Education Review, 83, 102130. doi:10.1016/j.econedurev.2021.102130
  • Kolek, L., Šisler, V., Martinková, P., & Brom, C. (2021). Can video games change attitudes towards history? Results from a laboratory experiment measuring short- and long-term effects. Journal of Computer Assisted Learning. In press. doi:10.1111/jcal.12575


Previous work we are building on

  • Hladká, A., & Martinková, P. (2020). difNLR: Generalized logistic regression models for DIF and DDF detection. The R Journal, 12(1), 300–323. doi:10.32614/RJ-2020-014
  • Martinková, P., Hladká, A., & Potužníková, E. (2020). Is academic tracking related to gains in learning competence? Using propensity score matching and differential item change functioning analysis for better understanding of tracking implications. Learning and Instruction, 66, 101286. doi:10.1016/j.learninstruc.2019.101286
  • Bartoš, F., Martinková, P., & Brabec, M. (2020). Testing heterogeneity in inter-rater reliability. In M. Wiberg, D. Molenaar, J. González, U. Böckenholt, & J.-S. Kim (Eds.), Quantitative psychology (pp. 347–364). Cham: Springer International Publishing. doi:10.1007/978-3-030-43469-4_26
  • Štěpánek, L., & Martinková, P. (2020). Feasibility of computerized adaptive testing evaluated by Monte-Carlo and post-hoc simulations. In Proceedings of the 2020 Federated Conference on Computer Science and Information Systems (FedCSIS) (pp. 359–367). doi:10.15439/2020F197
  • Martinková, P., Goldhaber, D., & Erosheva, E. (2018). Disparities in ratings of internal and external applicants: A case for model-based inter-rater reliability. PLoS ONE, 13(10), e0203002. doi:10.1371/journal.pone.0203002
  • Martinková, P., & Drabinová, A. (2018). ShinyItemAnalysis for teaching psychometrics and to enforce routine analysis of educational tests. The R Journal, 10(2), 503–515. doi:10.32614/RJ-2018-074
  • Drabinová, A., & Martinková, P. (2017). Detection of differential item functioning with non-linear regression: Non-IRT approach accounting for guessing. Journal of Educational Measurement, 54(4), 498–517. doi:10.1111/jedm.12158
  • Martinková, P., Drabinová, A., Liaw, Y.-L., Sanders, E. A., McFarland, J. L., & Price, R. M. (2017). Checking equity: Why DIF analysis should be a routine part of developing conceptual assessments. CBE–Life Sciences Education, 16(2), rm2. doi:10.1187/cbe.16-10-0307