Title record

Title: Computerized Adaptive Testing for Competency Tests in Austria / submitted by Armin Haba
Author: Haba, Armin
Supervisor: Grün, Bettina
Published: Linz, 2020
Extent: v, 80 leaves : illustrations
Language: English
Document type: Master's thesis
Keywords (DE): Test / Eignungstest (aptitude test) / Assessment-Center
Keywords (EN): CAT / MST / multistage / IQS / simulation study / catR / mstR
Keywords (GND): Linz
URN: urn:nbn:at:at-ubl:1-34084
Access restriction: The work is available in accordance with the "Hinweise für BenützerInnen" (notes for users).
Abstract

This thesis serves as a feasibility study for the implementation of computerized adaptive testing for competency tests in Austria. Competency tests are valid and reliable assessments that use test items to measure the competencies of students. The general purpose of such assessments is to create a profile of student performance at a specific point in time.

In an adaptive test, the items are not chosen beforehand; instead, item selection is based on the performance of the test taker: the better a student performs, the more difficult the selected items become. Adaptive testing has several advantages and disadvantages, some of which this thesis analyzes with regard to competency tests in Austria. The analyses are based on simulations using data from the real-life performance of Austrian students on a large-scale assessment with a linear (fixed-form) design.
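The thesis runs its simulations with the R packages catR and mstR; purely as an illustration of the selection principle described above, the core CAT loop can be sketched in Python under a Rasch (1PL) model. All names here are hypothetical, and the step-size ability update is a crude stand-in for the maximum-likelihood or Bayesian estimators used in practice:

```python
import math
import random

def rasch_prob(theta, b):
    """Probability of a correct response under the Rasch (1PL) model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta, b):
    """Fisher information of a Rasch item at ability theta: p * (1 - p)."""
    p = rasch_prob(theta, b)
    return p * (1.0 - p)

def adaptive_test(item_bank, true_theta, n_items, rng):
    """Toy CAT loop: repeatedly pick the most informative remaining item,
    simulate a response, and nudge the ability estimate up or down."""
    theta_hat = 0.0                # start at the population mean
    remaining = list(item_bank)    # item bank given as a list of difficulties
    for step in range(1, n_items + 1):
        # Maximum-information selection: for the Rasch model this is the
        # item whose difficulty is closest to the current ability estimate.
        item = max(remaining, key=lambda b: item_information(theta_hat, b))
        remaining.remove(item)
        correct = rng.random() < rasch_prob(true_theta, item)
        # Shrinking step-size update (a stand-in for proper estimation).
        theta_hat += (1.0 if correct else -1.0) / step
    return theta_hat

# Example run with a hypothetical bank of 61 difficulties in [-3, 3].
rng = random.Random(42)
bank = [i / 10 - 3 for i in range(61)]
estimate = adaptive_test(bank, true_theta=1.0, n_items=20, rng=rng)
```

With this scheme, easy items are administered after incorrect answers and hard items after correct ones, which is exactly the adaptivity described above in miniature.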

The principal topic of the thesis is a comparison of estimators obtained with computerized adaptive testing (CAT), multistage testing (MST), and fixed-form tests. Estimators are compared in terms of precision and accuracy, given a fixed test length of 68 items. An empirical investigation of the test length required to attain a specific precision is also performed. Additionally, competency tests in Austria must comply with the specification that certain content areas be balanced within a test (content balancing). Every test item belongs to a specific content area, and the number of items from each content area within a test should be similar. Applying content balancing constrains the test generation process, so there is particular interest in how strongly this affects the resulting estimates. Determining the quality of content balancing achieved by the different test designs is also part of this thesis. Finally, a brief excursion investigates the impact of an extensive item pool on ability estimates in computerized adaptive testing with content balancing.
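The content-balancing constraint described above can be illustrated with a minimal greedy sketch (not the thesis's actual procedure, which is implemented via catR/mstR): at each step, the candidate pool is restricted to the content area(s) currently least represented, so the per-area counts can never drift apart by more than one item. All names are hypothetical:

```python
from collections import Counter

def select_balanced(items, n_items):
    """Greedily select n_items from a pool of (item_id, content_area)
    pairs while keeping the content-area counts as equal as possible."""
    counts = Counter()
    chosen = []
    pool = list(items)
    areas = {area for _, area in pool}
    for _ in range(n_items):
        # Only areas with the current minimum count are eligible.
        least = min(counts[a] for a in areas)
        candidates = [it for it in pool if counts[it[1]] == least]
        item = candidates[0]  # a real CAT would pick by item information here
        pool.remove(item)
        chosen.append(item)
        counts[item[1]] += 1
    return chosen

# Hypothetical pool: 12 items spread over three content areas A, B, C.
pool = [(i, "ABC"[i % 3]) for i in range(12)]
test_form = select_balanced(pool, 9)
```

This is the sense in which content balancing constrains test generation: the most informative item overall may be skipped because its content area is already fully represented, which is why its effect on the resulting estimates is of particular interest.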
