Grammaticality judgement test: do item formats affect test performance?
Tan, B. H.¹ and Nor Izzati, M. N.²
A grammaticality judgement test (GJT) is one of the many ways to measure language proficiency and knowledge of grammar. It was introduced to second language research in the mid-1970s. The GJT is premised on the assumption that being proficient in a language means having two types of language knowledge: receptive knowledge, or language competence, and productive knowledge, or language performance. The GJT is meant to measure the former. In the test, learners judge whether a given item, usually presented out of context, is grammatical. Over the years, researchers have used the GJT to collect data on specific grammatical features when testing hypotheses, and data collected through a GJT are said to be more representative of a learner's language competence than naturally occurring data. Such elicitation also yields negative evidence (ungrammatical samples) that can be compared with production problems such as slips and incomplete sentences. Despite its usefulness, the application of the GJT remains controversial; beyond general reliability concerns, it has been argued that certain item formats are more reliable than others. The present study therefore seeks to determine whether two different item formats correlate with the English language proficiency of 100 ESL undergraduates.
Affiliation:
- Universiti Putra Malaysia, Malaysia
- Universiti Putra Malaysia, Malaysia
Indexation:
- MyJurnal (2019): H-Index 0; Immediacy Index 0.000; Rank 0
- Scopus (SCImago Journal Rankings 2016): Impact Factor -; Rank Q2 (Arts and Humanities (miscellaneous)), Q2 (Business, Management and Accounting (miscellaneous)), Q2 (Economics, Econometrics and Finance (miscellaneous)), Q2 (Social Sciences (miscellaneous)); Additional Information 0.333 (SJR)