
Title:
The Use of Open-Ended Questions in Large-Scale Tests for Selection: Generalizability and Dependability
Author(s):
Atilgan, Hakan; Demir, Elif Kübra (ORCID 0000-0002-3219-1644) ; Ogretmen, Tuncay; Basokcu, Tahsin Oguz
Source:
International Journal of Progressive Education, v16 n5 p216-227 2020. 12 pp.
Availability:
International Association of Educators. Available from: PEN Academic Publishing. Web site: http://www.inased.org/ijpe.htm; Web site: http://ijpe.penpublishing.net/
Peer Reviewed:
Y
ISSN:
1554-5210
Descriptors:
Foreign Countries, Secondary School Students, Test Items, Test Reliability, Interrater Reliability, Grade 8, Generalizability Theory, Tests
Abstractor:
As Provided
Language:
English
Number of Pages:
12
Education Level:
Secondary Education; Elementary Education; Grade 8; Junior High Schools; Middle Schools
Publication Type:
Journal Articles; Reports - Research
Journal Code:
JAN2022
Entry Date:
2020
Accession Number:
EJ1273106
A critical question is what level of reliability can be achieved when open-ended questions are used in large-scale selection tests. One aim of the present study is to determine the reliability obtained when experts score test-takers' answers to open-ended short-answer questions in large-scale selection tests. A further aim is to reveal how reliability changes as the numbers of items and raters change, and how many items and raters are required to reach a sufficient degree of reliability. The study group consisted of 443 eighth-grade students from three secondary schools located in three different towns of the city of Izmir. These students were given a test of 20 open-ended short-answer questions developed within the scope of the study. Students' answers were rated by four experienced teachers independently of one another. In the analyses, G theory's fully crossed two-facet design p x i x r was used, with students (p), items (i), and raters (r). The analyses yielded the generalizability coefficient Eρ² and the dependability coefficient Φ = 0.855, and it was concluded that well-trained raters can achieve an adequate level of scoring consistency for open-ended short-answer questions.
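The fully crossed two-facet design described in the abstract can be illustrated with a short computation. The sketch below is not the authors' analysis; it is a minimal, self-contained illustration of how a G study estimates variance components from a persons x items x raters score array via the standard three-way ANOVA decomposition, then forms the generalizability coefficient Eρ² (relative error) and the dependability coefficient Φ (absolute error). The function name, simulated data, and variance magnitudes are all assumptions made for the example.

```python
import numpy as np

def g_study(X):
    """G study for a fully crossed two-facet design p x i x r.
    X: scores of shape (persons, items, raters), one score per cell.
    Returns variance components, Eρ² (relative), and Φ (absolute)."""
    np_, ni, nr = X.shape
    gm = X.mean()
    mp = X.mean(axis=(1, 2)); mi = X.mean(axis=(0, 2)); mr = X.mean(axis=(0, 1))
    mpi = X.mean(axis=2); mpr = X.mean(axis=1); mir = X.mean(axis=0)
    # Mean squares from the three-way ANOVA decomposition
    ms_p = ni * nr * np.sum((mp - gm) ** 2) / (np_ - 1)
    ms_i = np_ * nr * np.sum((mi - gm) ** 2) / (ni - 1)
    ms_r = np_ * ni * np.sum((mr - gm) ** 2) / (nr - 1)
    ms_pi = nr * np.sum((mpi - mp[:, None] - mi[None, :] + gm) ** 2) \
        / ((np_ - 1) * (ni - 1))
    ms_pr = ni * np.sum((mpr - mp[:, None] - mr[None, :] + gm) ** 2) \
        / ((np_ - 1) * (nr - 1))
    ms_ir = np_ * np.sum((mir - mi[:, None] - mr[None, :] + gm) ** 2) \
        / ((ni - 1) * (nr - 1))
    resid = (X - mpi[:, :, None] - mpr[:, None, :] - mir[None, :, :]
             + mp[:, None, None] + mi[None, :, None] + mr[None, None, :] - gm)
    ms_pir = np.sum(resid ** 2) / ((np_ - 1) * (ni - 1) * (nr - 1))
    # Solve the expected-mean-square equations; clip negative estimates to 0
    v = {'pir': ms_pir,
         'pi': max((ms_pi - ms_pir) / nr, 0.0),
         'pr': max((ms_pr - ms_pir) / ni, 0.0),
         'ir': max((ms_ir - ms_pir) / np_, 0.0),
         'p': max((ms_p - ms_pi - ms_pr + ms_pir) / (ni * nr), 0.0),
         'i': max((ms_i - ms_pi - ms_ir + ms_pir) / (np_ * nr), 0.0),
         'r': max((ms_r - ms_pr - ms_ir + ms_pir) / (np_ * ni), 0.0)}
    # Relative error: only interactions involving persons contribute
    rel_err = v['pi'] / ni + v['pr'] / nr + v['pir'] / (ni * nr)
    # Absolute error adds the main and interaction effects of the facets
    abs_err = rel_err + v['i'] / ni + v['r'] / nr + v['ir'] / (ni * nr)
    g = v['p'] / (v['p'] + rel_err)      # generalizability coefficient Eρ²
    phi = v['p'] / (v['p'] + abs_err)    # dependability coefficient Φ
    return v, g, phi

# Simulated data mirroring the study's dimensions (443 students,
# 20 items, 4 raters); the variance magnitudes are invented.
rng = np.random.default_rng(0)
P, I, R = 443, 20, 4
X = (rng.normal(0.0, 1.0, (P, 1, 1))    # person (true-score) effect
     + rng.normal(0.0, 0.5, (1, I, 1))  # item difficulty
     + rng.normal(0.0, 0.2, (1, 1, R))  # rater severity
     + rng.normal(0.0, 0.6, (P, I, R))) # interactions + residual error
v, g, phi = g_study(X)
```

Because the absolute-error term includes the item and rater main effects while the relative-error term does not, Φ can never exceed Eρ²; reporting both, as the abstract does, shows reliability for both relative (ranking) and absolute (cut-score) decisions.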
