Many studies have shown that peer evaluation is more productive when students are properly prepared (e.g., Falchikov & Goldfinch, 2000; Zariski, 1996). A crucial part of this preparation is involving students in defining the criteria for the peer evaluation.
This feature describes a face-to-face activity that complements an online peer evaluation. In this activity, a class discussion, led by an experienced instructor, is held to decide what criteria the peer evaluation should include. Following the discussion, students vote for their preferred criteria, and the most popular ones are used in the online peer-evaluation activity.
After trialing and refining this feature, we found that for our specific contents, which involve morals and beliefs (and which led to biased scoring at initial stages), it was beneficial to seed the discussion with ideas about non-objective criteria.
The figure attached to this feature shows the resulting criteria in the online peer-evaluation activity. These criteria change each semester according to students' suggestions in the face-to-face discussion.
The Rationale Behind the Feature (Specific Design Principle):
Involve students in developing the evaluation criteria for the peer-evaluation.
Context of Use:
The study took place in an undergraduate educational-philosophy course at the Technion, taught by the authors of this feature. The main goal of the course was to help students develop their own perceptions about fundamental issues in education and schooling (e.g., What is the goal of schooling? What contents should be taught in school? What should be the role of the teacher?). A main theme in the course was the “ideal school” project, in which groups of 3-4 students constructed a conceptual model of a school that met their evolving educational perceptions.
Toward the end of the semester, each group gave a short presentation of one day in their ideal school. For this purpose, most students used PowerPoint, but other, less conventional means, such as drama performances, were also used. The presentations took place in three class meetings, with three or four presentations in each session. One challenge we faced was how to ensure that students make the most of these meetings. Prior teaching experience in similar contexts revealed that students tend to focus on accomplishing the course's requirements (their own presentations, in this case) and are less interested in their peers' projects.
This challenge was addressed by designing a peer-evaluation activity in which students were involved in the assessment of their peers' “ideal school” presentations. The rationale for engaging students in this activity was: a) to ensure their involvement in their peers' projects, b) to create a framework for them to learn from each other's projects, c) to help them develop evaluation skills that they would need as future educators, and d) to reinforce criteria for designing their projects. The analysis of this peer-evaluation activity by the instructor involved the integration of hundreds of assessments (35 students, times 10 groups, times about four criteria).
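The scale of this integration task can be sketched with a quick back-of-the-envelope calculation (a minimal sketch; the variable names are ours, and the counts are taken from the figures stated above):

```python
# Counts stated in the text: 35 students each assess 10 group
# presentations on about four criteria.
students = 35
groups = 10
criteria = 4  # "about four" in the text

# One assessment per student-group pair.
assessments = students * groups            # 350 assessments
# Each assessment carries roughly one score per criterion.
criterion_scores = assessments * criteria  # ~1400 individual scores

print(assessments, criterion_scores)
```

This is why integrating the assessments by hand was impractical and a computerized system was needed.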
To facilitate the analysis, we decided to use a computerized system that enabled us to gather, present, and analyze these assessments productively. The activity was therefore performed online with the CeLS environment (Collaborative e-Learning Structures), a novel system that allows the instructor to create and conduct a variety of online structured collaborative activities (http://www.mycels.net).
A three-iteration design-based research study indicated that when students were better prepared for the peer-evaluation activity, via the preceding face-to-face activity, their peer evaluations correlated more highly with those of the instructors.
An attitude questionnaire administered at the end of the course indicated that students took the activity more seriously because of their involvement in defining its evaluation criteria.
Additionally, class observations showed that through the discussion and negotiation of the criteria, students better understood the goals of the project and the foci for their presentations.
Falchikov, N., & Goldfinch, J. (2000). Student peer assessment in higher education: A meta-analysis comparing peer and teacher marks. Review of Educational Research, 70(3), 287-322.
Zariski, A. (1996). Student peer assessment in tertiary education: Promise, perils and practice. In Abbott, J., & Willcoxson, L. (Eds.), Teaching and learning within and across disciplines: Proceedings of the 5th Annual Teaching Learning Forum (pp. 189-200). Perth: Murdoch.
Kali, Y., & Ronen, M. (2005). Design principles for online peer-evaluation: Fostering objectivity. In Koschmann, T., Suthers, D. D., & Chan, T. W. (Eds.), Computer support for collaborative learning: The next 10 years! Proceedings of CSCL 2005 (Taipei, Taiwan). Lawrence Erlbaum Associates.