Selected outputs

Below is a selection of the group's outputs. To see outputs by project, go to Funding and select a project of interest.

Selected papers

  • Erosheva, E., Martinková, P., & Lee, C. J. (2021). When zero may not be zero: A cautionary note on the use of inter-rater reliability in evaluating grant peer review. Journal of the Royal Statistical Society: Series A, doi:10.1111/rssa.12681
  • Goldhaber, D., Grout, C., Wolff, M., & Martinková, P. (2021). Evidence on the dimensionality and reliability of professional references' ratings of teacher applicants. Economics of Education Review, 83, 102130, doi:10.1016/j.econedurev.2021.102130
  • Kolek, L., Šisler, V., Martinková, P., & Brom, C. (2021). Can video games change attitudes towards history? Results from a laboratory experiment measuring short- and long-term effects. Journal of Computer Assisted Learning. In press. doi:10.1111/jcal.12575
  • Hladká, A., & Martinková, P. (2020). difNLR: Generalized logistic regression models for DIF and DDF detection. The R Journal, 12(1), 300–323, doi:10.32614/RJ-2020-014
  • Martinková, P., Hladká, A., & Potužníková, E. (2020). Is academic tracking related to gains in learning competence? Using propensity score matching and differential item change functioning analysis for better understanding of tracking implications. Learning and Instruction, 66, 101286, doi:10.1016/j.learninstruc.2019.101286
  • Bartoš, F., Martinková, P., & Brabec, M. (2020). Testing heterogeneity in inter-rater reliability. In M. Wiberg, D. Molenaar, J. González, U. Böckenholt, & J.-S. Kim (Eds.), Quantitative psychology (pp. 347–364). Cham: Springer International Publishing, doi:10.1007/978-3-030-43469-4_26
  • Štěpánek, L., & Martinková, P. (2020). Feasibility of computerized adaptive testing evaluated by Monte Carlo and post-hoc simulations. In Proceedings of the 2020 Federated Conference on Computer Science and Information Systems (FedCSIS) (pp. 359–367), doi:10.15439/2020F197
  • Martinková, P., Goldhaber, D., & Erosheva, E. (2018). Disparities in ratings of internal and external applicants: A case for model-based inter-rater reliability. PLoS ONE, 13(10), e0203002, doi:10.1371/journal.pone.0203002
  • Martinková, P., & Drabinová, A. (2018). ShinyItemAnalysis for teaching psychometrics and to enforce routine analysis of educational tests. The R Journal, 10(2), 503–515, doi:10.32614/RJ-2018-074
  • Martinková, P., Drabinová, A., Liaw, Y.-L., Sanders, E. A., McFarland, J. L., & Price, R. M. (2017). Checking equity: Why DIF analysis should be a routine part of developing conceptual assessments. CBE—Life Sciences Education, 16(2), rm2, doi:10.1187/cbe.16-10-0307
  • Martinková, P., Drabinová, A., & Houdek, J. (2017). ShinyItemAnalysis: Analýza přijímacích a jiných znalostních či psychologických testů [ShinyItemAnalysis: Analyzing admission and other educational and psychological tests. In Czech.] TESTFÓRUM, 9, 16–35, doi:10.5817/TF2017-9-129
  • Martinková, P., Štěpánek, L., Drabinová, A., Houdek, J., Vejražka, M., & Štuka, Č. (2017). Semi-real-time analyses of item characteristics for medical school admission tests. In Federated Conference on Computer Science and Information Systems (FedCSIS) (pp. 189–194), doi:10.15439/2017F380
  • Drabinová, A., & Martinková, P. (2017). Detection of differential item functioning with non-linear regression: Non-IRT approach accounting for guessing. Journal of Educational Measurement, 54(4), 498–517, doi:10.1111/jedm.12158