GPCM ANALYSIS OF CONSTRUCTED-RESPONSE ASSESSMENT (CRA): MEASURING ABSTRACTION VIA ANALYTIC CODING

Dwi Rismi Ocy(1*), Wardani Rahayu(2), Iva Sarifah(3), Awaluddin Tjalla(4),

(1) Universitas Negeri Jakarta
(2) Universitas Negeri Jakarta
(3) Universitas Negeri Jakarta
(4) Universitas Negeri Jakarta
(*) Corresponding Author


Abstract


Traditional multiple-choice tests frequently fail to capture the subtle cognitive ability of mathematical abstraction in topics such as trigonometric ratios. This study developed and evaluated a performance-based assessment instrument for measuring students' mathematical abstraction ability in the context of trigonometric ratios. The instrument consisted of three constructed-response items, each scored with a dedicated analytic rubric aligned with the revised Bloom's taxonomy and abstraction processes. After person-fit screening, 700 of 913 responses from tenth-grade students in six Indonesian high schools were retained. Generalized Partial Credit Model (GPCM) analysis showed strong item fit, appropriate threshold values, and high discrimination indices, demonstrating the instrument's effectiveness in distinguishing between levels of abstraction. Exploratory factor analysis confirmed unidimensionality, with excellent model fit and 41.0% of the total variance explained (RMSEA = 0.0301; TLI = 0.9775). Person-item maps indicated that item difficulty levels aligned well with the sample's ability distribution. High factor loadings and communalities further supported the construct validity of the analytic rubric. The results show that the instrument is theoretically and psychometrically sound, providing a robust means of assessing mathematical abstraction as a higher-order thinking skill (HOTS). The study demonstrates how GPCM-based analytic scoring can capture complex cognitive performance and inform teaching strategies aligned with the Merdeka Curriculum.
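Under the GPCM used here, the probability that an examinee with ability θ earns score category k on a polytomous item depends on the item's discrimination a and its step (threshold) parameters b_j. A minimal sketch of the category-probability computation follows; the parameter values are hypothetical illustrations, not estimates from this study, which fitted the model with dedicated IRT software.

```python
import numpy as np

def gpcm_probs(theta, a, b):
    """Category probabilities for one GPCM item scored 0..m.

    theta : examinee ability
    a     : item discrimination
    b     : sequence of m step (threshold) parameters
    """
    b = np.asarray(b, dtype=float)
    # Cumulative logits: z_0 = 0, z_k = sum_{j<=k} a * (theta - b_j)
    z = np.concatenate(([0.0], np.cumsum(a * (theta - b))))
    z -= z.max()                  # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()            # normalize over the m+1 categories

# Hypothetical 4-category item (scores 0-3) at average ability:
probs = gpcm_probs(theta=0.0, a=1.2, b=[-1.0, 0.0, 1.0])
```

Well-separated, ordered thresholds like these produce category response curves in which each score level is the most probable outcome somewhere along the ability scale, which is what the abstract's "suitable threshold values" refers to.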


Keywords


Analytic Rubric; Constructed-Response Assessment; EFA; GPCM; IRT; Mathematical Abstraction


References


Álvarez-Díaz, M., Muñiz-Bascón, L. M., Soria-Alemany, A., Veintimilla-Bonet, A., & Fernández-Alonso, R. (2020). On the Design and Validation of a Rubric for the Evaluation of Performance in a Musical Contest. International Journal of Music Education, 39(1), 66–79. https://doi.org/10.1177/0255761420936443

Amelia, R., Listiaji, P., Dewi, N. R., Heriyanti, A. P., Atmaja, B. D., Shoba, T. M., & Sajidi, I. (2024). Developing and Validating a Rubric for Measuring Skills in Designing Science Experiments for Prospective Science Teachers. Jurnal Inovasi Pendidikan Ipa, 10(1), 32–46. https://doi.org/10.21831/jipi.v10i1.65853

Angraini, L. M. (2018). Pengaruh concept attainment model terhadap disposisi berpikir kritis matematis mahasiswa. JNPM (Jurnal Nasional Pendidikan Matematika), 2(2), 284. https://doi.org/10.33603/jnpm.v2i2.1473

Attali, Y., Laitusis, C., & Stone, E. (2016). Differences in Reaction to Immediate Feedback and Opportunity to Revise Answers for Multiple-Choice and Open-Ended Questions. Educational and Psychological Measurement, 76(5), 787–802. https://doi.org/10.1177/0013164415612548

Bonifay, W., & Cai, L. (2017). On the Complexity of Item Response Theory Models. Multivariate Behavioral Research, 52(4), 465–484. https://doi.org/10.1080/00273171.2017.1309262

Buchholz, J., & Hartig, J. (2017). Comparing Attitudes Across Groups: An IRT-Based Item-Fit Statistic for the Analysis of Measurement Invariance. Applied Psychological Measurement, 43(3), 241–250. https://doi.org/10.1177/0146621617748323

Bürkner, P., Schwabe, R., & Holling, H. (2018). Optimal Designs for the Generalized Partial Credit Model. British Journal of Mathematical and Statistical Psychology, 72(2), 271–293. https://doi.org/10.1111/bmsp.12148

de Ayala, R. J. (2009). The Theory and Practice of Item Response Theory (D. A. Kenny, Ed.). The Guilford Press.

Dimitrov, D. M., & Luo, Y. (2019). A Note on the D-Scoring Method Adapted for Polytomous Test Items. Educational and Psychological Measurement, 79(3), 545–557. https://doi.org/10.1177/0013164418786014

Eckerly, C., Jia, Y., & Jewsbury, P. (2022). Technology‐Enhanced Items and Model–Data Misfit. ETS Research Report Series, 2022(1), 1–16. https://doi.org/10.1002/ets2.12353

Edimuslim, E. (2022). Analisis Kemampuan Abstraksi Matematis Siswa Sekolah Menengah Pertama Ditinjau Dari Gaya Belajar Tipe Kolb. Suska Journal of Mathematics Education, 8(1), 39. https://doi.org/10.24014/sjme.v8i1.16831

Erni, E., Ma’rufi, M., & Ilyas, M. (2022). Pengaruh Kemadirian Belajar Terhadap Kemampuan Berpikir Kreatif Matematika Siswa. Kognitif Jurnal Riset Hots Pendidikan Matematika, 2(1), 53–61. https://doi.org/10.51574/kognitif.v2i1.386

Essen, C. B., Idaka, I. E., & Metibemu, M. A. (2017). Item Level Diagnostics and Model - Data Fit in Item Response Theory (IRT) Using BILOG - MG v3.0 and IRTPRO v3.0 Programmes. Global Journal of Educational Research, 16(2), 87. https://doi.org/10.4314/gjedr.v16i2.2

Faisal, A. F., Lambertus, L., & Baharuddin, B. (2020). Pengaruh Kemandirian Belajar Matematik Siswa Terhadap Kemampuan Berpikir Kreatif Matematis Siswa SMA Negeri 03 Bombana. Jurnal Pembelajaran Berpikir Matematika (Journal of Mathematics Thinking Learning), 5(2). https://doi.org/10.33772/jpbm.v5i2.15749

Felt, J. M., Castaneda, R., Tiemensma, J., & Depaoli, S. (2017). Using Person Fit Statistics to Detect Outliers in Survey Research. Frontiers in Psychology, 8. https://doi.org/10.3389/fpsyg.2017.00863

Flora, D. B., & Flake, J. K. (2017). The purpose and practice of exploratory and confirmatory factor analysis in psychological research: Decisions for scale development and validation. Canadian Journal of Behavioural Science / Revue Canadienne Des Sciences Du Comportement, 49(2), 78–88. https://doi.org/10.1037/cbs0000069

Gibson Jr., T. O., Morrow, J. A., & Rocconi, L. M. (2020). A Modernized Heuristic Approach to Robust Exploratory Factor Analysis. The Quantitative Methods for Psychology, 16(4), 295–307. https://doi.org/10.20982/tqmp.16.4.p295

Gilbert, M. C. (2016). Relating aspects of motivation to facets of mathematical competence varying in cognitive demand. The Journal of Educational Research, 109(6), 647–657. https://doi.org/10.1080/00220671.2015.1020912

Gos, E., Sagan, A., Skarżyński, P. H., & Skarżyński, H. (2020). Improved Measurement of Tinnitus Severity: Study of the Dimensionality and Reliability of the Tinnitus Handicap Inventory. PLOS ONE, 15(8), e0237778. https://doi.org/10.1371/journal.pone.0237778

Hamhuis, E. R., Glas, C. A. W., & Meelissen, M. R. (2020). Tablet Assessment in Primary Education: Are There Performance Differences Between TIMSS’ Paper‐and‐pencil Test and Tablet Test Among Dutch Grade‐four Students? British Journal of Educational Technology, 51(6), 2340–2358. https://doi.org/10.1111/bjet.12914

Hawai, M. F. (2021). Proses Berpikir Matematis Siswa Dalam Menyelesaikan Soal PISA Kategori HOTS Dan Scaffoldingnya. Mathedunesa, 10(1), 95–109. https://doi.org/10.26740/mathedunesa.v10n1.p95-109

Huen, J. M. Y., Yip, P. S. F., Osman, A., & Leung, A. N. M. (2023). Item Response Theory and Differential Item Functioning Analyses With the Suicidal Behaviors Questionnaire–Revised in US and Chinese Samples. Crisis, 44(2), 108–114. https://doi.org/10.1027/0227-5910/a000837

Karakuş, G., & Ocak, G. (2022). The Implementation of Cooperative Problem-Solving Rubric Towards Turkish Fourth Grade Students. Mimbar Sekolah Dasar, 9(1), 1–23. https://doi.org/10.53400/mimbar-sd.v9i1.39390

Khairunnisa, I., Ariyanto, L., & Endahwuri, D. (2021). Analisis Berpikir Kreatif Matematis Ditinjau Dari Motivasi Belajar Siswa. Imajiner Jurnal Matematika Dan Pendidikan Matematika, 3(6), 527–534. https://doi.org/10.26877/imajiner.v3i6.8087

Kim, J., & Wilson, M. (2019). Polytomous Item Explanatory Item Response Theory Models. Educational and Psychological Measurement, 80(4), 726–755. https://doi.org/10.1177/0013164419892667

Kuo, B.-C., Chen, C.-H., Yang, C.-W., & Mok, M. M. C. (2016). Cognitive diagnostic models for tests with multiple-choice and constructed-response items. Educational Psychology, 36(6), 1115–1133. https://doi.org/10.1080/01443410.2016.1166176

Lorenzo‐Seva, U., & Ferrando, P. J. (2023). A Simulation-Based Scaled Test Statistic for Assessing Model-Data Fit in Least-Squares Unrestricted Factor-Analysis Solutions. Methodology, 19(2), 96–115. https://doi.org/10.5964/meth.9839

Mustangin, M., & Setiawan, Y. E. (2021). Pemahaman Konsep Mahasiswa Semester Satu Pada Mata Kuliah Trigonometri. Jurnal Elemen, 7(1), 98–116. https://doi.org/10.29408/jel.v7i1.2773

Navas-López, E. A. (2024). Confirmatory Factor Analysis of a Rubric for Assessing Algorithmic Thinking on Undergraduate Students. Cuadernos De Investigación Educativa, 15(2). https://doi.org/10.18861/cied.2024.15.2.3797

Ocy, D. R., Rahayu, W., & Makmuri, M. (2023). Rasch Model Analysis: Development Of Hots-Based Mathematical Abstraction Ability Instrument According To Riau Islands Culture. AKSIOMA: Jurnal Program Studi Pendidikan Matematika, 12(4), 3542–3560.

Prihono, E. W., Lapele, F., Jumaeda, S., Sukadari, S., & Nurjanah, S. (2022). EFA of Pedagogic Competence Instrument to Measure Teacher Performance. https://doi.org/10.2991/assehr.k.220129.059

Rhemtulla, M., Bork, R. v., & Borsboom, D. (2020). Worse Than Measurement Error: Consequences of Inappropriate Latent Variable Measurement Models. Psychological Methods, 25(1), 30–45. https://doi.org/10.1037/met0000220

Setiawan, Y. E. (2022). Prospective Teachers Representations in Problem Solving of Special Angle Trigonometry Functions Based on the Level of Ability. Infinity Journal, 11(1), 55. https://doi.org/10.22460/infinity.v11i1.p55-76

Shrestha, N. (2021). Factor Analysis as a Tool for Survey Analysis. American Journal of Applied Mathematics and Statistics, 9(1), 4–11. https://doi.org/10.12691/ajams-9-1-2

Sukmawan, I., Sridana, N., & Novitasari, D. (2022). Hubungan Konsep Diri Terhadap Kemampuan Berpikir Logis Matematis Siswa SMP Negeri 18 Mataram Tahun Pelajaran 2021/2022. Jurnal Ilmiah Profesi Pendidikan, 7(3b), 1564–1571. https://doi.org/10.29303/jipp.v7i3b.816

Syarifudin, M. T., Ratnaningsih, N., & Ni’mah, K. (2021). Analisis Kemampuan Abstraksi Matematis dalam Pembelajaran Matematika di MAN 1 Tasikmalaya. MUST: Journal of Mathematics Education, Science and Technology, 6(2), 231. https://doi.org/10.30651/must.v6i2.7461

Umlauft, M., Placzek, M., Konietschke, F., & Pauly, M. (2019). Wild Bootstrapping Rank-Based Procedures: Multiple Testing in Nonparametric Factorial Repeated Measures Designs. Journal of Multivariate Analysis, 171, 176–192. https://doi.org/10.1016/j.jmva.2018.12.005

Usman, M. H., & Hussaini, M. M. (2017). Analysis of Students’ Error in Learning of Trigonometry Among Senior Secondary School Students in Zaria Metropolis, Nigeria. Iosr Journal of Mathematics, 13(02), 01–04. https://doi.org/10.9790/5728-1302040104

Wallmark, J., Ramsay, J. O., Li, J., & Wiberg, M. (2023). Analyzing Polytomous Test Data: A Comparison Between an Information-Based IRT Model and the Generalized Partial Credit Model. Journal of Educational and Behavioral Statistics, 49(5), 753–779. https://doi.org/10.3102/10769986231207879

Wang, C., Xu, G., Shang, Z., & Kuncel, N. (2018). Detecting Aberrant Behavior and Item Preknowledge: A Comparison of Mixture Modeling Method and Residual Method. Journal of Educational and Behavioral Statistics, 43(4), 469–501. https://doi.org/10.3102/1076998618767123

Watkins, M. W. (2018). Exploratory Factor Analysis: A Guide to Best Practice. Journal of Black Psychology, 44(3), 219–246. https://doi.org/10.1177/0095798418771807

Watson, J. C. (2017). Establishing Evidence for Internal Structure Using Exploratory Factor Analysis. Measurement and Evaluation in Counseling and Development, 50(4), 232–238. https://doi.org/10.1080/07481756.2017.1336931

Wetzel, E., & Carstensen, C. H. (2014). Reversed Thresholds in Partial Credit Models. Assessment, 21(6), 765–774. https://doi.org/10.1177/1073191114530775

Wind, S. A. (2022). Detecting Rating Scale Malfunctioning With the Partial Credit Model and Generalized Partial Credit Model. Educational and Psychological Measurement, 83(5), 953–983. https://doi.org/10.1177/00131644221116292

Yanti, N. F., & Wijaya, A. (2023). Meta-Analisis: Pengaruh Penerapan Model Pembelajaran Problem-Based Learning Terhadap Kemampuan Berpikir Kritis Matematis Siswa. Aksioma Jurnal Program Studi Pendidikan Matematika, 12(1), 1213. https://doi.org/10.24127/ajpm.v12i1.6750

Yao, L., & Schwarz, R. D. (2006). A Multidimensional Partial Credit Model With Associated Item and Test Statistics: An Application to Mixed-Format Tests. Applied Psychological Measurement, 30(6), 469–492. https://doi.org/10.1177/0146621605284537

Zhao, Y., & Hambleton, R. K. (2017). Practical Consequences of Item Response Theory Model Misfit in the Context of Test Equating With Mixed-Format Test Data. Frontiers in Psychology, 8. https://doi.org/10.3389/fpsyg.2017.00484

Zhou, S., & Huggins‐Manley, A. C. (2020). The Performance of the Semigeneralized Partial Credit Model for Handling Item-Level Missingness. Educational and Psychological Measurement, 80(6), 1196–1215. https://doi.org/10.1177/0013164420918392




DOI: http://dx.doi.org/10.24127/ajpm.v14i4.12976
