Once the exclusive preserve of small graduate courses, peer assessment is being rediscovered as an effective and efficient learning tool in large undergraduate classes, a transition made possible through the use of electronic assignment submissions and web-based support software. Asking large numbers of undergraduates to grade each other's work raises a number of obvious concerns. How will mark reliability and validity be maintained? Can plagiarism be detected or prevented? What effect will 'rogue' reviewers have on the integrity of the process? Will effective learning actually occur? In this paper we address the issue of grade reliability, and present a novel technique for identifying and minimising the impact of 'rogues.' Simulations suggest the method is effective under a wide range of conditions.
Cite as: Hamer, J., Ma, K.T.K. and Kwong, H.H.F. (2005). A Method of Automatic Grade Calibration in Peer Assessment. In Proc. Seventh Australasian Computing Education Conference (ACE2005), Newcastle, Australia. CRPIT, 42. Young, A. and Tolhurst, D., Eds. ACS. 67-72.