Mapping the Landscape of Rubric-Based Assessment Studies: A Bibliometric Review
Abstract
This study maps the intellectual structure and research trends of rubric-based assessment through a comprehensive bibliometric analysis. Drawing on publications indexed in the Scopus database between 2000 and 2025, it identifies the evolution, key contributors, and thematic development of research in this field. Bibliometric techniques were applied in VOSviewer to analyze co-authorship networks, keyword co-occurrence, and thematic clustering. The results reveal that rubric-based assessment research has developed into three major domains: pedagogical application, applied professional education, and psychometric validation. Early studies focused on establishing validity, reliability, and measurement rigor, while subsequent research emphasized instructional integration, formative assessment, and competency-based learning. More recent work shows growing interest in integrating artificial intelligence, large language models, and digital learning systems into rubric-based assessment practices. The findings also show that while traditional themes such as teaching and educational measurement remain dominant, emerging technology-driven approaches are shaping future research directions. The study contributes a systematic overview of the knowledge structure and evolution of rubric-based assessment research, offering researchers and educators insights for developing more effective, adaptive, and technology-enhanced assessment frameworks.
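To make the keyword co-occurrence step concrete, the sketch below counts how often keyword pairs appear together across publication records, which is the basic computation underlying a VOSviewer co-occurrence map. This is an illustrative sketch only, not the authors' pipeline: the `records` list is hypothetical, and a real analysis would parse a Scopus CSV export rather than hard-coded data.

```python
# Minimal sketch of keyword co-occurrence counting, the kind of analysis
# VOSviewer performs on bibliographic exports. The records below are
# hypothetical placeholders; a real study would load a Scopus CSV export.
from collections import Counter
from itertools import combinations

# Each record is the author-keyword list of one publication (illustrative only).
records = [
    ["rubric", "formative assessment", "higher education"],
    ["rubric", "validity", "reliability"],
    ["rubric", "formative assessment", "large language models"],
]

# Count how often each unordered keyword pair appears in the same record.
cooccurrence = Counter()
for keywords in records:
    for pair in combinations(sorted(set(keywords)), 2):
        cooccurrence[pair] += 1

# The most frequent pairs form the strongest edges in the co-occurrence network.
for (a, b), count in cooccurrence.most_common(5):
    print(f"{a} -- {b}: {count}")
```

In a full bibliometric workflow, these pair counts become edge weights in a keyword network, which clustering algorithms (such as the one built into VOSviewer) then partition into the thematic domains reported in the abstract.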