Developing Transparent Evaluation Criteria
Why Develop Evaluation Criteria?
Assessment of student writing—whether in traditional text-based assignments or in newer digital or multimedia formats—can feel uncomfortably subjective. And the truth is that there are inescapably subjective elements to evaluating student writing. However, it’s not entirely subjective, either. Communal standards among members of a discourse community (e.g., experts in a field, department colleagues, TA-faculty teams) can and should be articulated in ways that narrow the scope of what “effective” or “successful” writing looks like in a given context.
For instance, depending on the goals of your assignment and the standards set by your discourse community (however that applies to your teaching situation), you might identify the following elements as a starting point for developing your own evaluation criteria:
- quality of ideas
- persuasiveness of argument
- organization and development
- sentence structure, usage, and mechanics
By taking the time to identify and prioritize evaluation criteria for each of your writing assignments as you design them, you save time down the road by clearly communicating (and explaining!) your expectations to students.
Consider Designing a Rubric
Perhaps the most familiar means of communicating evaluation criteria is through the development of a rubric. Rubrics aim to minimize differences among multiple readers—or across multiple reading sessions for a single reader—in order to achieve higher reliability in the application of communally determined criteria. Criteria might be determined within a discipline/profession, a teaching team, or in collaboration with students.[1]
Rubrics come in many different sizes and shapes, but the primary variations are whether a rubric is analytic vs. holistic, and whether it is generic vs. task-specific.
Analytic vs. Holistic
Evaluation criteria in a rubric can be presented to students either “analytically” or “holistically.” The analytic method gives separate scores for each criterion—for example, ideas, organization, use of evidence, etc. The holistic method gives one score that reflects the reader’s overall impression of the paper, considering all criteria at once.
| Analytic | Holistic |
| --- | --- |
| Gives a separate score for each criterion (e.g., ideas, organization, use of evidence) | Gives one score reflecting the reader’s overall impression of the paper, weighing all criteria at once |
Analytic rubrics also often specify levels of achievement for each criterion. The “step-down method” can be used to identify varying levels of achievement in a rubric by “stepping down” degrees of performance or merit. Typical language includes terms such as these:
- always → usually → some of the time → rarely
- fully → adequately → partially → minimally
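For instructors who record scores electronically, the analytic method can be sketched in a few lines of code. The sketch below is purely illustrative: the criterion names come from the list earlier in this section, but the weights and the 4-point mapping of step-down terms are assumptions, not a prescribed design.

```python
# Illustrative sketch of an analytic rubric: each criterion gets its own
# step-down rating, and the ratings are combined into one weighted total.
# The weights and the 4-point scale below are assumptions for demonstration.

STEP_DOWN = {"fully": 4, "adequately": 3, "partially": 2, "minimally": 1}

# Hypothetical weights: how much each criterion counts toward the total.
RUBRIC = {
    "quality of ideas": 0.3,
    "persuasiveness of argument": 0.3,
    "organization and development": 0.2,
    "sentence structure, usage, and mechanics": 0.2,
}

def analytic_score(ratings: dict) -> float:
    """Combine per-criterion step-down ratings into one weighted score (0-4)."""
    return sum(STEP_DOWN[ratings[c]] * w for c, w in RUBRIC.items())

paper = {
    "quality of ideas": "fully",
    "persuasiveness of argument": "adequately",
    "organization and development": "adequately",
    "sentence structure, usage, and mechanics": "partially",
}
print(round(analytic_score(paper), 2))  # 3.1
```

A holistic rubric, by contrast, would skip the per-criterion breakdown entirely and record a single overall rating, which is why analytic designs are often preferred when students need itemized feedback.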
Generic vs. Task-Specific
Either analytic or holistic rubrics can also be classified as “generic” or “task-specific.” Generic rubrics follow one-size-fits-all designs, aimed for use across a variety of writing tasks (that is, they try to be universal). By contrast, task-specific rubrics are designed to fit an individual assignment or genre.
| Generic | Task-Specific |
| --- | --- |
| One-size-fits-all design meant for use across a variety of writing tasks | Tailored to the demands of an individual assignment or genre |
See our chapter on How to Build and Use Rubrics Effectively for more detailed information on using rubrics to increase transparency in your evaluation criteria.
- All of this section excerpted or paraphrased from Bean and Melzer, pp. 255–66.