Conferences in Research and Practice in Information Technology
  

Considerations in Automated Marking

Fenwick, J.

    With large classes, high demands on the time of teaching academics, and the need to keep marking budgets under control, evaluating the functional correctness of programming assignments can be challenging. Entirely automating the evaluation process may seem desirable, but it would deny students formative feedback from more experienced programmers and so reduce their opportunity to correct errors in their practice. Instead, this paper discusses marking processes in which much of the “heavy lifting”, the repetitive work, is automated while still allowing for human feedback. We discuss the impact of automated marking on assessment design and on students, and consider where the hard work is hidden. The literature describes many projects that automate various parts of the process, with varying interfaces and levels of integration with external systems. In the author's opinion, however, these are not strictly required, and we describe a simpler set of requirements.
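
As a rough illustration only, and not the system described in the paper: a marking harness in this spirit might automatically run each submission against a set of test cases and record pass/fail results, while leaving a comment field for a human marker to add formative feedback afterwards. All names below (Submission, run_tests, the test-case format) are hypothetical.

    import subprocess
    from dataclasses import dataclass, field

    @dataclass
    class Result:
        test_name: str
        passed: bool
        output: str

    @dataclass
    class Submission:
        student_id: str
        program: str                        # path to the student's script
        results: list = field(default_factory=list)
        marker_comment: str = ""            # left blank for the human marker to fill in

    def run_tests(sub: Submission, tests: list) -> None:
        """Run each (name, stdin, expected_stdout) test case against the submission."""
        for name, stdin_data, expected in tests:
            try:
                proc = subprocess.run(
                    ["python3", sub.program],
                    input=stdin_data, capture_output=True, text=True, timeout=10,
                )
                sub.results.append(
                    Result(name, proc.stdout.strip() == expected.strip(), proc.stdout))
            except subprocess.TimeoutExpired:
                sub.results.append(Result(name, False, "(timed out)"))

    if __name__ == "__main__":
        # Hypothetical paths and test data for illustration only.
        sub = Submission("s1234567", "submissions/s1234567/assign1.py")
        run_tests(sub, [("adds two numbers", "2 3\n", "5")])
        for r in sub.results:
            print(r.test_name, "PASS" if r.passed else "FAIL")
        # The pass/fail summary and captured output are then handed to a human
        # marker, who reviews them and records formative feedback in marker_comment.

The automated pass handles the repetitive execution and comparison; the human marker's time is reserved for the feedback that automation cannot provide.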
Cite as: Fenwick, J. (2015). Considerations in Automated Marking. In Proc. 17th Australasian Computing Education Conference (ACE 2015), Sydney, Australia. CRPIT, 160. D'Souza, D. and Falkner, K., Eds. ACS. 111-118.