Measuring Improvement in Latent Semantic Analysis-Based Marking Systems: Using a Computer to Mark Questions about HTML
Haley, D.T., Thomas, P., De Roeck, A. and Petre, M.
This paper proposes two unconventional metrics as an important tool for assessment research: the Manhattan (L1) and the Euclidean (L2) distance measures. We used them to evaluate the results of a Latent Semantic Analysis (LSA) system that assesses short answers to two questions about HTML in an introductory computer science class. This is, as far as we know, the only study that addresses how well an LSA-based system can evaluate answers written in the very specific and technical language of HTML. We found that, although the literature offers several ways to measure automatic assessment results, none was suitable for our purpose of comparing the marks given by LSA with the marks awarded by a human tutor. We demonstrate how L1 and L2 quantify the results of varying the amount of training data needed for LSA to mark the answers to the two HTML questions. Although this paper describes the use of the metrics in one particular case, they have more general applicability. Considerable fine-tuning of an LSA marking system is required for good results, so a researcher needs an easy way to evaluate the effect of each modification to the system. The Manhattan and Euclidean distance measures provide this functionality.
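As a minimal sketch (not the authors' code), the two metrics can be computed by treating the marks for a set of answers as vectors and measuring the distance between the LSA-assigned marks and the tutor's marks; the mark values below are hypothetical, not data from the paper.

```python
import math

def manhattan(a, b):
    """L1 distance: sum of absolute differences between paired marks."""
    return sum(abs(x - y) for x, y in zip(a, b))

def euclidean(a, b):
    """L2 distance: square root of the summed squared differences."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Hypothetical marks for six short answers; not taken from the study.
tutor_marks = [8, 5, 9, 3, 7, 6]
lsa_marks   = [7, 5, 8, 4, 7, 5]

print(manhattan(tutor_marks, lsa_marks))  # 4
print(euclidean(tutor_marks, lsa_marks))  # 2.0
```

A smaller distance indicates closer agreement with the human marker, so a researcher can compare configurations of the marking system by comparing these two numbers.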
Cite as: Haley, D.T., Thomas, P., De Roeck, A. and Petre, M. (2007). Measuring Improvement in Latent Semantic Analysis-Based Marking Systems: Using a Computer to Mark Questions about HTML. In Proc. Ninth Australasian Computing Education Conference (ACE2007), Ballarat, Australia. CRPIT, 66. Mann, S. and Simon, Eds. ACS. 35-42.