Rasch modeling has long been an established approach to educational measurement. Nonetheless, a strategic review of recent volumes of "Educational Researcher" and issues of "Language Testing" reveals limited use of the approach, suggesting that its full potential is yet to be realized. This article investigates patterns of Rasch item estimates in tests of foundation English courses at a university in Thailand. Not only could this draw the attention of academia to the approach, but it also touches upon a little-researched area in program evaluation. It is hypothesized that the test items of the English examinations increase incrementally in difficulty, from the first prerequisite to the last course in the series of foundation English courses. Multiple Rasch analyses are performed on item responses, addressing a rebuttal to the validity argument. A key finding, however, is that the results are mixed: the tests of the courses are not always aligned with the expected difficulty gradient. Implications for the use of the approach and for the finding are also discussed. These include a call for more studies using Rasch modeling and a call for scrutinizing language courses and examinations that are administered as a series. A framework for dealing with null results is also advocated, whereby such findings ought to be articulated with clarity. [ABSTRACT FROM AUTHOR]
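For readers unfamiliar with the approach, the standard dichotomous Rasch model underlying the item estimates discussed above gives the probability of a correct response as a function of person ability and item difficulty (the abstract does not specify the exact model variant used; this is the common formulation):

```latex
P(X_{ni} = 1 \mid \theta_n, b_i) = \frac{\exp(\theta_n - b_i)}{1 + \exp(\theta_n - b_i)}
```

Here \(\theta_n\) is the ability of person \(n\) and \(b_i\) is the difficulty of item \(i\); the hypothesis of incremental difficulty across the course series amounts to expecting the \(b_i\) estimates to rise from the first course's test to the last.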
Copyright of International Journal of Assessment & Evaluation is the property of Common Ground Research Networks.