A Review of Scoring Algorithms for Ability and Aptitude Tests.


In conventional practice, most educators and educational researchers score cognitive tests using a dichotomous right-wrong scoring system. Although simple and straightforward, this method does not take into account other factors, such as partial knowledge or guessing tendencies and abilities. This paper discusses alternative scoring models: (1) credit for omissions; (2) disproportionate correction for wrong versus omitted items (correcting for guessing); (3) scoring only the items that a given examinee is expected to answer correctly based on one-parameter item response theory (Lawson, 1991); and (4) scoring using various partial credit models, including misinformation. The literature on the utility of each algorithm, including validity and reliability, is also summarized briefly. Psychologists should be familiar with alternative scoring strategies, since such strategies can be useful in the design, administration, or analysis of results from measures of cognitive abilities, especially in high-stakes testing. Findings from this exploration indicate that correction-for-guessing formulas show no significant benefit over conventional scoring (no correction), and while results on partial credit scoring algorithms are inconclusive, the observed slight increases in reliability and validity do not justify the additional complexity, time, and cost involved in developing, administering, scoring, and interpreting test results. (Contains 1 table and 20 references.) (Author/SLD)
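As a point of reference, the first two alternatives the abstract lists have standard closed forms: the classical correction-for-guessing formula score S = R - W/(k - 1) for a k-option item, and the credit-for-omissions variant that awards 1/k per omitted item (the expected value of a blind guess). A minimal sketch of both, alongside conventional number-right scoring, might look like this; the function names and the example counts are illustrative, not taken from the paper:

```python
def number_right(right: int) -> float:
    """Conventional dichotomous scoring: one point per correct answer."""
    return float(right)

def formula_score(right: int, wrong: int, choices: int) -> float:
    """Classical correction for guessing: deduct W/(k-1) per wrong
    answer on k-option items; omitted items carry no penalty."""
    return right - wrong / (choices - 1)

def omission_credit(right: int, omitted: int, choices: int) -> float:
    """Credit-for-omissions variant: award 1/k per omitted item,
    the expected score of a random guess on a k-option item."""
    return right + omitted / choices

# Hypothetical 40-item, 4-option test: 25 right, 10 wrong, 5 omitted.
print(number_right(25))           # 25.0
print(formula_score(25, 10, 4))   # 25 - 10/3 ≈ 21.67
print(omission_credit(25, 5, 4))  # 25 + 5/4 = 26.25
```

Note how the three rules diverge only in how they treat wrong and omitted responses, which is exactly the design choice the reviewed literature evaluates for reliability and validity.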

Descriptors: Ability, Algorithms, Aptitude Tests, Cognitive Tests, Guessing (Tests), High Stakes Tests, Item Response Theory, Reliability, Scoring, Validity

Author: Chevalier, Shirley A.

Source: https://eric.ed.gov/?q=a&ft=on&ff1=dtySince_1992&pg=8584&id=ED417220
