Equating by Using a Circle Equation Approach: An Applied Mathematics Formula to Prevent Discrimination

Deni Iriyadi, Ahmad Rustam, Hevriana Hartati

Abstract


This study aims to determine the accuracy of an equating method that uses a circle equation approach through its circular arc (Simplified Circle Arc, SCA). The research uses 2015 National Examination data from two test packages, with preliminary samples of 2,135 examinees on test form X and 2,271 examinees on test form Y. The data were first screened with a Rasch analysis using the Outfit Mean Square (MNSQ) statistic. Replication was then performed up to 50 times for each type of data distribution; in each replication, up to 50 respondents were drawn from the original data set and used for score equating. The Root Mean Square Error (RMSE) statistic was then used to evaluate the equating results. The results showed that groups with the same data distribution produce a lower average RMSE than groups with different data distributions, and a low average RMSE indicates accurate equating. Thus, the SCA method is highly recommended for equating scores, especially with small samples in school classrooms, to prevent discrimination in grading.
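To make the procedure described above concrete, the following Python sketch illustrates simplified circle-arc (SCA) equating and the RMSE criterion as commonly described in the small-sample equating literature cited below (e.g., Livingston; Kolen & Brennan). It is a minimal sketch under assumed conditions: a random-groups design, a 0-50 score range, binomially simulated toy scores, and a criterion equating computed from the full groups; all function names, numbers, and data are illustrative assumptions, not the authors' code or data.

# Hedged illustration of simplified circle-arc (SCA) equating and RMSE.
# Function names, toy scores, and the random-groups midpoint are assumptions
# made for illustration only.

import numpy as np


def simplified_circle_arc(x, x_scores, y_scores, x_min, x_max, y_min, y_max):
    """Equate raw scores x on form X to the scale of form Y with the
    simplified circle-arc method: a linear chord through two fixed end
    points plus a circular-arc adjustment through the empirical midpoint."""
    x = np.asarray(x, dtype=float)

    # Fixed end points: lowest and highest possible scores on each form.
    x1, y1 = x_min, y_min
    x3, y3 = x_max, y_max

    # Empirical midpoint: mean score on X paired with mean score on Y
    # (random-groups design assumed here).
    x2, y2 = np.mean(x_scores), np.mean(y_scores)

    # Linear component: the chord through the two end points.
    slope = (y3 - y1) / (x3 - x1)

    def chord(t):
        return y1 + slope * (t - x1)

    # Curvilinear component: vertical distance of the midpoint from the chord.
    y2s = y2 - chord(x2)
    if np.isclose(y2s, 0.0):
        return chord(x)  # midpoint lies on the chord -> equating is linear

    # Circle through (x1, 0), (x2, y2s), (x3, 0); the first and third points
    # lie on the horizontal axis, so the center's x-coordinate is their mean.
    xc = (x1 + x3) / 2.0
    yc = ((x2 - xc) ** 2 + y2s ** 2 - (x1 - xc) ** 2) / (2.0 * y2s)
    r = np.sqrt((x1 - xc) ** 2 + yc ** 2)

    arc = yc + np.sign(y2s) * np.sqrt(r ** 2 - (x - xc) ** 2)
    return chord(x) + arc


def rmse(equated, criterion):
    """Root Mean Square Error between equated scores and a criterion."""
    equated = np.asarray(equated, dtype=float)
    criterion = np.asarray(criterion, dtype=float)
    return float(np.sqrt(np.mean((equated - criterion) ** 2)))


# Toy usage (illustrative only): a 50-item test scored 0-50.
rng = np.random.default_rng(0)
full_x = rng.binomial(50, 0.55, size=2135)   # full form-X group
full_y = rng.binomial(50, 0.60, size=2271)   # full form-Y group
grid = np.arange(0, 51)

criterion = simplified_circle_arc(grid, full_x, full_y, 0, 50, 0, 50)

# One replication with a small subsample of 50 respondents per form.
sub_x = rng.choice(full_x, size=50, replace=False)
sub_y = rng.choice(full_y, size=50, replace=False)
replication = simplified_circle_arc(grid, sub_x, sub_y, 0, 50, 0, 50)

print(rmse(replication, criterion))

In this sketch a lower RMSE for the small-sample replication relative to the criterion signals more accurate equating, mirroring how the study compares groups whose score distributions match against groups whose distributions differ.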

Keywords


Circle Equation; Data Distribution; Equating; Simplified Circle Arc; Prevent Discrimination.


References


Albano, A. D. (2015). A General Linear Method for Equating With Small Samples. Journal of Educational Measurement, 52(1), 55–69.

Altintas, Ö., & Wallin, G. (2021). Equality of admission tests using kernel equating under the non-equivalent groups with covariates design. International Journal of Assessment Tools in Education, 8(4), 729–743. https://doi.org/10.21449/ijate.976660

Antara, A., & Bastari, B. (2015). Penyetaraan Vertikal Dengan Pendekatan Klasik Dan Item Response Theory Pada Siswa Sekolah Dasar [Vertical equating with classical and item response theory approaches for elementary school students]. Jurnal Penelitian Dan Evaluasi Pendidikan, 19(1), 13–24. https://doi.org/10.21831/pep.v19i1.4551

Arikan, Ç. A., & Gelbal, S. (2018). A comparison of traditional and kernel equating methods. International Journal of Assessment Tools in Education, 5(3), 417–427. https://doi.org/10.21449/ijate.409826

Aşiret, S., & Sünbül, S. Ö. (2016). Investigating test equating methods in small samples through various factors. Educational Sciences: Theory & Practice, 16(2). https://doi.org/10.12738/estp.2016.2.2762

Babcock, B., & Hodge, K. J. (2020). Rasch versus classical equating in the context of small sample sizes. Educational and Psychological Measurement, 80(3), 499–521. https://doi.org/10.1177/0013164419878483

Battauz, M. (2017). Multiple equating of separate IRT calibrations. Psychometrika, 82(3), 610–636. https://doi.org/10.1007/s11336-016-9517-x

Caglak, S. (2016). Comparison of several small sample equating methods under the NEAT design. Turkish Journal of Education, 5(3), 96–118. https://doi.org/10.19128/turje.16916

Cain, M. K., Zhang, Z., & Yuan, K.-H. (2017). Univariate and multivariate skewness and kurtosis for measuring nonnormality: Prevalence, influence and estimation. Behavior Research Methods, 49, 1716–1735. https://doi.org/10.3758/s13428-016-0814-1

Diao, H., & Keller, L. (2020). Investigating repeater effects on small sample equating: Include or exclude? Applied Measurement in Education, 33(1), 54–66. https://doi.org/10.1080/08957347.2019.1674302

Dimitrov, D., & Atanasov, D. (2021). An Approach to Test Equating under the Latent D-scoring Method. Measurement: Interdisciplinary Research and Perspectives, 19, 153–162. https://doi.org/10.1080/15366367.2020.1843107

Dorans, N. J., & Puhan, G. (2017). Contributions to score linking theory and practice. Advancing Human Assessment: The Methodological, Psychological and Policy Contributions of ETS, 79–132. https://library.oapen.org/bitstream/handle/20.500.12657/28140/1001854.pdf?sequence=1

Dziak, J. J., Lanza, S. T., & Tan, X. (2014). Effect size, statistical power, and sample size requirements for the bootstrap likelihood ratio test in latent class analysis. Structural Equation Modeling: A Multidisciplinary Journal, 21(4), 534–552. https://doi.org/10.1080/10705511.2014.919819

Ee, N. S., & Yeo, K. J. (2018). Item Analysis for the Adapted Motivation Scale Using Rasch Model. International Journal of Evaluation and Research in Education, 7(4), 264–269. https://eric.ed.gov/?id=EJ1198607

Epskamp, S., Borsboom, D., & Fried, E. I. (2018). Estimating psychological networks and their accuracy: A tutorial paper. Behavior Research Methods, 50, 195–212. https://doi.org/10.3758/s13428-017-0862-1

Falani, I., Iriyadi, D., Ice, Y. W., Susanti, H., & Nasution, R. A. (2022). A Rasch analysis of perceived stigma of COVID-19 among nurses in Indonesia questionnaire. Psychological Thought, 15(1), 12. https://doi.org/10.37708/psyct.v15i1.530

Harris, D. J., & Kolen, M. J. (2016). A Comparison of Two Equipercentile Equating Methods for Common Item Equating. Educational and Psychological Measurement, 50(1). https://doi.org/10.1177/0013164490501

Himelfarb, I. (2019). A primer on standardized testing: History, measurement, classical test theory, item response theory, and equating. Journal of Chiropractic Education, 33(2), 151–163.

Hippel, P. V. (2010). Skewness. International Encyclopedia of Statistical Science, 100, 1–4. https://doi.org/10.4135/9781412952644

Hsiao, Y. Y., Shih, C. L., Yu, W. H., Hsieh, C. H., & Hsieh, C. L. (2015). Examining unidimensionality and improving reliability for the eight subscales of the SF-36 in opioid-dependent patients using Rasch analysis. Quality of Life Research, 24(2), 279–285. https://doi.org/10.1007/s11136-014-0771-z

Iriyadi, D., Asdar, A. K., Afriadi, B., Samad, M. A., & Syaputra, Y. D. (2024). Analysis of the Indonesian Version of the Statistical Anxiety Scale (SAS) Instrument with a Psychometric Approach. International Journal of Evaluation and Research in Education, 13(2). https://doi.org/10.11591/ijere.v13i2.26517

Karunasingha, D. S. K. (2022). Root mean square error or mean absolute error? Use their ratio as well. Information Sciences, 585, 609–629. https://doi.org/10.1016/j.ins.2021.11.036

Kemendikbud. (2017). Peraturan Menteri Pendidikan Dan Kebudayaan Republik Indonesia Nomor 17 Tahun 2017: Penerimaan peserta didik baru pada taman kanak-kanak, sekolah dasar, sekolah menengah pertama, sekolah menengah atas, sekolah menengah kejuruan, atau bentuk lain [Regulation of the Minister of Education and Culture of the Republic of Indonesia Number 17 of 2017: Admission of new students to kindergartens, elementary schools, junior secondary schools, senior secondary schools, vocational secondary schools, or other forms]. Retrieved from https://psma.kemdikbud.go.id/index/home/lib/files/SALINAN%20PPDB.pdf

Kolen, M. J., & Brennan, R. L. (2014). Test Equating, Scaling, and Linking (Third Edit). Springer. https://doi.org/10.1007/978-1-4939-0317-7

LaFlair, G. T., Isbell, D., May, L. D. N., Gutierrez Arvizu, M. N., & Jamieson, J. (2017). Equating in small-scale language testing programs. Language Testing, 34(1), 127–144. https://doi.org/10.1177/026553221562082

Liemohn, M. W., Shane, A. D., Azari, A. R., Petersen, A. K., Swiger, B. M., & Mukhopadhyay, A. (2021). RMSE is not enough: Guidelines to robust data-model comparisons for magnetospheric physics. Journal of Atmospheric and Solar-Terrestrial Physics, 218, 105624.

Lissitz, R. W., & Huynh, H. (2019). Vertical equating for state assessments: Issues and solutions in determination of adequate yearly progress and school accountability. Practical Assessment, Research, and Evaluation, 8(1), 10. https://doi.org/10.7275/npzw-wd59

Livingston, S. A. (2014). Equating test scores (without IRT). Educational Testing Service. https://eric.ed.gov/?id=ED560972

Masino, S., & Niño-Zarazúa, M. (2016). What works to improve the quality of student learning in developing countries? International Journal of Educational Development, 48, 53–65. https://doi.org/10.1016/j.ijedudev.2015.11.012

Moses, T. (2022). Linking and comparability across conditions of measurement: Established frameworks and proposed updates. Journal of Educational Measurement, 59(2), 231–250. https://doi.org/10.1111/jedm.12322

Müller, M. (2020). Item fit statistics for Rasch analysis: can we trust them? Journal of Statistical Distributions and Applications, 7, 1–12. https://link.springer.com/article/10.1186/s40488-020-00108-7

O’Neill, T. R., Gregg, J. L., & Peabody, M. R. (2020). Effect of sample size on common item equating using the dichotomous Rasch model. Applied Measurement in Education, 33(1), 10–23. https://doi.org/10.1080/08957347.2019.1674309

Ozdemir, B. (2017). Equating TIMSS Mathematics Subtests with Nonlinear Equating Methods Using NEAT Design: Circle-Arc Equating Approaches. International Journal of Progressive Education, 13(2), 116–132. https://eric.ed.gov/?id=EJ1145605

Peabody, M. R. (2020). Some methods and evaluation for linking and equating with small samples. Applied Measurement in Education, 33(1), 3–9. https://doi.org/10.1080/08957347.2019.1674304

Pommerich, M. (2016). The fairness of comparing test scores across different tests or modes of administration. Fairness in Educational Assessment and Measurement, 111–134. https://doi.org/10.4324/9781315774527-9

Rahayu, W., Putra, M. D. K., Iriyadi, D., Rahmawati, Y., & Koul, R. B. (2020). A Rasch and factor analysis of an Indonesian version of the Student Perception of Opportunity Competence Development (SPOCD) questionnaire. Cogent Education, 7(1), 1721633. https://doi.org/10.1080/2331186X.2020.1721633

Razali, N. M., & Wah, Y. B. (2011). Power comparisons of Shapiro-Wilk, Kolmogorov-Smirnov, Lilliefors and Anderson-Darling tests. Journal of Statistical Modeling and Analytics, 2(1), 21–33. https://doi.org/10.1515/bile-2015-0008

Sainani, K. L. (2012). Dealing With Non-normal Data. PM and R, 4(12), 1001–1005. https://doi.org/10.1016/j.pmrj.2012.10.013

Schalet, B. D., Lim, S., Cella, D., & Choi, S. W. (2021). Linking scores with patient-reported health outcome instruments: A validation study and comparison of three linking methods. Psychometrika, 86(3), 717–746. https://doi.org/10.1007/s11336-021-09776-z

Singh, A. S., & Masuku, M. B. (2014). Sampling techniques & determination of sample size in applied statistics research: An overview. International Journal of Economics, Commerce and Management, 2(11), 1–22. https://www.researchgate.net/publication/341552596_Sampling_Techniques_and_Determination_of_Sample_Size_in_Applied_Statistics_Research_An_Overview

Sinnema, C., Ludlow, L., & Robinson, V. (2016). Journal of Educational Administration and History. Journal of Educational Administration and History, 35(2). https://doi.org/10.1080/713676155

Souza, A. C. de, Alexandre, N. M. C., & Guirardello, E. de B. (2017). Psychometric properties in instruments evaluation of reliability and validity. Epidemiologia e Servicos de Saude, 26, 649–659. https://doi.org/10.5123/S1679-49742017000300022

Sumarni, W. (2015). The strengths and weaknesses of the implementation of project based learning: A review. International Journal of Science and Research, 4(3), 478–484. https://www.ijsr.net/archive/v4i3/SUB152023.pdf

Uto, M. (2021). Accuracy of performance-test linking based on a many-facet Rasch model. Behavior Research Methods, 53(4), 1440–1454. https://doi.org/10.3758/s13428-020-01498-x

Uysal, İ., & Kilmen, S. (2016). Comparison of Item Response Theory Test Equating Methods for Mixed Format Tests. International Online Journal of Educational Sciences, 8(2). https://doi.org/10.15345/iojes.2016.02.001

von Davier, M., Yamamoto, K., Shin, H. J., Chen, H., Khorramdel, L., Weeks, J., Davis, S., Kong, N., & Kandathil, M. (2019). Evaluating item response theory linking and model fit for data from PISA 2000–2012. Assessment in Education: Principles, Policy & Practice, 26(4), 466–488. https://doi.org/10.1080/0969594X.2019.1586642

Wang, W., & Lu, Y. (2018). Analysis of the mean absolute error (MAE) and the root mean square error (RMSE) in assessing rounding model. IOP Conference Series: Materials Science and Engineering, 324, 12049. https://doi.org/10.1088/1757-899X/324/1/012049

Yu, C. H., & Osborn-Popp, S. E. (2019). Test equating by common items and common subjects: Concepts and applications. Practical Assessment, Research, and Evaluation, 10(1), 4. https://www.researchgate.net/publication/253274721_Test_Equating_by_Common_Items_and_Common_Subjects_Concepts_and_Applications




DOI: https://doi.org/10.31764/jtam.v8i3.22195



Copyright (c) 2024 Deni Iriyadi, Ahmad Rustam, Hevriana Hartati

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
