Conferences in Research and Practice in Information Technology
  

A Method of Automatic Grade Calibration in Peer Assessment

Hamer, J., Ma, K.T.K. and Kwong, H.H.F.

    Once the exclusive preserve of small graduate courses, peer assessment is being rediscovered as an effective and efficient learning tool in large undergraduate classes, a transition made possible through the use of electronic assignment submissions and web-based support software. Asking large numbers of undergraduates to grade each other's work raises a number of obvious concerns. How will mark reliability and validity be maintained? Can plagiarism be detected or prevented? What effect will 'rogue' reviewers have on the integrity of the process? Will effective learning actually occur? In this paper we address the issue of grade reliability, and present a novel technique for identifying and minimising the impact of 'rogues.' Simulations suggest the method is effective under a wide range of conditions.
Cite as: Hamer, J., Ma, K.T.K. and Kwong, H.H.F. (2005). A Method of Automatic Grade Calibration in Peer Assessment. In Proc. Seventh Australasian Computing Education Conference (ACE2005), Newcastle, Australia. CRPIT, 42. Young, A. and Tolhurst, D., Eds. ACS. 67-72.
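
The abstract does not spell out the calibration procedure, but one common way to realise automatic grade calibration of this kind is an iterative reweighting scheme: each submission's grade is a weighted average of the peer marks it received, and each reviewer's weight shrinks as their marks deviate from the emerging consensus, so 'rogue' reviewers lose influence. The sketch below illustrates that general idea only; it is not taken from the paper, and the function and parameter names (calibrate_grades, marks, n_iter, eps) are hypothetical.

    import numpy as np

    def calibrate_grades(marks, n_iter=20, eps=1e-3):
        """Illustrative iterative grade calibration (not the paper's algorithm).

        marks: 2-D array where marks[r, s] is reviewer r's mark for submission s,
               with np.nan where reviewer r did not assess submission s.
        Assumes every submission received at least one mark and every reviewer
        marked at least one submission.
        Returns (grades, weights): a consensus grade per submission and a
        reliability weight per reviewer.
        """
        marks = np.asarray(marks, dtype=float)
        n_reviewers, n_submissions = marks.shape
        weights = np.ones(n_reviewers)        # start by trusting every reviewer equally
        assessed = ~np.isnan(marks)           # which (reviewer, submission) pairs exist
        grades = np.zeros(n_submissions)

        for _ in range(n_iter):
            # 1. Consensus grades: weighted mean of the marks each submission received.
            for s in range(n_submissions):
                rs = assessed[:, s]
                grades[s] = np.average(marks[rs, s], weights=weights[rs])

            # 2. Reviewer weights: inversely related to squared deviation from the
            #    consensus, so reviewers who disagree systematically lose influence.
            new_weights = np.empty(n_reviewers)
            for r in range(n_reviewers):
                ss = assessed[r, :]
                dev = np.mean((marks[r, ss] - grades[ss]) ** 2)
                new_weights[r] = 1.0 / (1.0 + dev)

            if np.max(np.abs(new_weights - weights)) < eps:
                weights = new_weights
                break
            weights = new_weights

        return grades, weights

Given a marks matrix with reviewers as rows and submissions as columns (np.nan for unassigned pairs), calibrate_grades returns a consensus grade for each submission together with a per-reviewer weight that can be inspected to flag outlying reviewers.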