Improving peer evaluation of writing

Teaching students to evaluate their course assignments has several benefits… at least theoretically. Students spend time focusing on the rubric and goals of the assignment, and they learn how to provide useful feedback to a “coworker,” a skill likely to be valuable in their future careers. Analyzing a work in your discipline requires critical thinking and higher-level Bloom’s Taxonomy skills. If the evaluation is successful, students become better able to recognize examples of clear and effective work in any genre.

Unfortunately, many instructors find that peer evaluations produce simplistic, overly gentle feedback, show no evidence of deeper learning, and leave students irritated at “having to do the teacher’s work.” Below are several suggestions to make peer evaluations more successful in your course, plus some ideas that move away from traditional peer evaluations altogether.

Incorporate peer assessment into the learning goals of your course.

If one of the reasons you are using peer assessment is to teach students a skill they will need in their discipline (as part of journal review, or working in teams), tell students regularly that the exercise is there to build that skill, not just to reduce your workload. If the work you are having them do is not directly relevant to “real world” work, make the other benefits of the assignment clear. In Scott Freeman’s introductory biology course at the University of Washington, he assigns “practice exam questions” for students to answer and then peer grade online. Here, the benefit is giving students practice thinking like an exam grader, so their own answers will be better on the next exam.

Train students how to evaluate well.

While browsing for references for this post, I came across a recommended form to give students to use while peer grading. It contained questions like: “Rate the clarity of this paper: 5  4  3  2  1.” This is not a question that will produce useful feedback from a student. Does the student know what “clarity” is? Have they seen several examples of clear and unclear writing on similar topics, so they can differentiate between them? Can they explain why they gave a “4” instead of a “3”?

Given that students are not naturally good evaluators of work they have little experience with, an instructor has two options. The first is to make the evaluation rubric extremely objective, with questions like “does this answer meet the required length?” or “does the writer provide three quotes from the text?” or “are all three references from appropriate academic journals that can be accessed through the UCI Library?” These are the sorts of evaluations that students can complete regardless of their familiarity with the assignment or genre.

The second option, if you want a more complete, analytical, or comparative evaluation, is to provide students with training. Excellent methods of teaching students how to evaluate include:

  • Provide three short, anonymous assignments and have students rate each, then compare their scores within a small group. Discuss with the class how you would evaluate the same writings.
  • If you are assigning different assignment types (proposals, blog posts, research papers, journal article summaries) AND want students to understand an element found in each (clarity, APA references, voice), be sure to point out how the element is expressed in each assignment type. Again, provide examples of good and bad work.

You may even want to remove the “peer” from the evaluation, and regularly have students evaluate a “sample” assignment in class based on the rubric you will use to grade their actual assignment. This approach was used in biology at the University of British Columbia, and the instructors report that it generated substantial in-class discussion about the assignment and about how the instructor grades. Because everyone was talking about the SAME work, the analysis was much deeper than if each student had been examining a different paper.

Recognize external pressures on students.

Every class has a certain culture of behavior, and generally the culture is “don’t say disrespectful things to others in the class.” That norm is valuable, but it is probably why most peer evaluations tend to be weak and overly nice. Talk to students about what is and is not appropriate to say in an evaluation. Create a list of “actual student comments” and have student groups decide which comments are too weak, which are too mean, and which are appropriate and helpful. Reinforce the idea that an honest evaluation is a tool that helps the writer earn a better grade when the assignment is actually graded, so softening the feedback actually harms the evaluatee.

Consider using an online tool or an organizational method that keeps evaluators anonymous to other students. When students know their identities are shielded from classmates but not from the instructor, they can be more honest in their feedback.

Create good evaluation rubrics.

It’s easy to think of a bad rubric question for students to use while evaluating, and much harder to think of really good ones. Once you have talked with students regularly about how to evaluate, give them instructions that guide good evaluations:

  • Dr. Greg McClure teaches composition at UCI, and he asks his students to go beyond posing a question or pointing out a deficiency: the reviewer must write out a corrective example that demonstrates the principle in question.
  • Dr. Tagert Ellis also teaches composition with regular peer evaluations. To focus attention on several different aspects of review, he sets up “stations” that each address a single task. The readers at a station read several papers together and concentrate on that one issue. Both the discussion and the focus improve their evaluation skills.

Think outside the box.

UCI’s Professor Beth van Es teaches a graduate course for future teachers, and one of her assignments is for students to sit in small groups and watch short videos of each other’s teaching. Her primary goal is not for evaluators to judge “how good” the other student’s teaching is, but to give these students opportunities to implement a theory of teaching practice, and then have support in studying how well that implementation worked. In this environment, the focus is on the learning theory and the assignment itself, rather than on the student.

Similarly, Professor Sharon Block in History uses group work instead of peer evaluations to teach analytical writing. Students all bring their writing to the group, and the group picks the best piece of analysis from all the papers to share with the class. Or students each write a sample “analytic question” on the board, and the class as a whole discusses whether it is analytic or factual, and how it might be improved. A sample of her recent classwork is pictured below:

[Photo: student-written analytic questions on the board in Professor Block’s class]

As in Dr. van Es’s example, the focus is less on the student and more on practicing and improving the skill.

Perhaps this will give you an idea for an assignment that is not a traditional “peer assessment” of written work. Is there a skill your discipline requires for which students would benefit from having an evaluation partner? Can you use video and group work to shift the focus to the implementation of a theory, or the effectiveness of a technique, rather than the quality of the assignment itself? Dr. van Es encourages clear frameworks even in this sort of evaluation, to guide the discussion and produce feedback that the “evaluatee” can reflect on and act upon.