TY - JOUR
TI - Automated Essay Scoring Effect on Test Equating Errors in Mixed-format Test
AB - Scoring constructed-response items is difficult, time-consuming, and costly in practice. Advances in computer technology have made automated scoring of constructed-response items possible. However, applying automated scoring without investigating its effect on test equating can lead to serious problems. The goal of this study was to score the constructed-response items in mixed-format tests automatically at different training/test data ratios and to investigate the indirect effect of these scores on test equating, in comparison with human raters. Bidirectional long short-term memory (BLSTM) was selected as the automated scoring method because it showed the best performance. In the test equating process, methods based on classical test theory and item response theory were used. For most of the equating methods, the equating errors resulting from automated scoring were close to those obtained when equating was based on human raters. It was concluded that automated scoring can be applied, as it is viable with respect to test equating.
AU - UYSAL, IBRAHIM
AU - DOĞAN, NURİ
DO - 10.21449/ijate.815961
PY - 2021
JO - International Journal of Assessment Tools in Education
VL - 8
IS - 2
SN - 2148-7456
SP - 222
EP - 238
DB - TRDizin
UR - http://search/yayin/detay/1131141
ER -