Designing Assessment for Active Learning
Robert Runté, Faculty of Education, University of Lethbridge
Volume 13 Number 3 (April 2004)

A mid-career colleague had become increasingly dissatisfied with his traditional chalk-and-talk teaching and decided to experiment with a more active case-based approach. Initially, he waxed enthusiastic about the difference it was making: Students, he said, were more directly engaged with the material; were mastering key concepts sooner and at a deeper level; were going well beyond rote memorization to critically analyze cases; and even class attendance had dramatically improved, because students wanted to hear and participate in the—now lively—class discussion.

Two months later, however, he pronounced the experiment a complete failure. He told me that his students had lost interest and stopped reading the cases, if they bothered to show up at all; discussions had turned into long torturous silences, punctuated by antagonistic sniping; worst of all, students were now performing well below previous classes. He now regretted ever having changed.

Puzzled by this, I asked if he was perhaps now holding students to too high a standard. He assured me that this was not the case, because he had used the exact same test as in previous years. Typical of his tests was this question: "In the second case study, the home office of company X was located in which city? (a) Toronto, (b) Montreal, (c) New York, (d) San Francisco." When I challenged my colleague as to what possible significance such a question could have, he answered, somewhat defensively, that in order to discuss the case studies, students needed to have read them thoroughly, and by asking this very specific question about a minor detail that had only turned up in a footnote on the final page, he could determine which students had read the case all the way through.

Unfortunately, his is an all too common misconception. A better way of checking whether students have read and understood a case is to examine whether they have acquired the knowledge and skills necessary to apply the concepts to a new case; but the problem here goes much deeper than simply a failure to assess higher-level thinking. In my view, it was this question, and the other rote memorization multiple-choice questions on the test, that had killed his course. It was not that the novelty had worn off, but that students had been ambushed by the first mid-term. Having come prepared to discuss all the exciting concepts from their case studies, they were confronted instead with a test that asked them for rote memorization of arbitrary and trivial details. No wonder the students who had actually mastered the key concepts had nevertheless scored poorly.

Students quickly learn that what ultimately counts is what is on the test; his students had therefore abandoned the class, refusing to be drawn into discussions they now recognized as redundant, and no longer bothering to read cases they had no hope of memorizing verbatim. One can attempt all the exciting, novel and engaging activities in class one can imagine, but if the test rewards only rote memorization, then true learning will be sabotaged. Indeed, the rising expectations generated by improved instruction will lead to greater resentment and more negative comments on course evaluations than if one had simply lectured.

We therefore need to modify our approaches to evaluation if we truly want to move towards more active learning. Active learning cannot occur without "active evaluation".

The Need for Active Evaluation

Partly, this means a move away from an over-reliance on tests and the traditional essay towards other assessment techniques. Testing often encourages passive learning, particularly when tests are drawn from publishers' test-banks (often of questionable quality) or written by faculty who lack the training necessary to develop questions that can assess higher-order thinking skills. Unless properly written, tests and essay assignments can promote memorization and regurgitation without understanding, and predispose students to passively accept whatever instructors or texts tell them, rather than critically engaging with the materials.

If we want active learning, we must find ways to evaluate and reward active engagement with the material: for better or worse, assessment always drives our classes; so, if we want particular behaviours, if we want to promote particular types of knowledge or skills or attitudes, we must start by designing the evaluations that will elicit those behaviours, skills and attitudes. There are many contexts in which tests and term papers are appropriate, but these are only two of many assessment tools, and should only be used when they make pedagogical sense. We need to broaden our assessment repertoire to ensure that our evaluation promotes active learning.

More fundamentally, however, "active evaluation" requires a change in our approach to assessment.

First, we have to stop viewing assessment as something separate from instruction. Assessment and learning interpenetrate and need to be interconnected: Doing the case study is the assessment in a case-based course; doing the inquiry is the assessment in an inquiry-based course. Once we stop scheduling evaluation as a separate activity (e.g., examination week), everything changes: For example, it is unthinkable to tell a student the answer to a test question during a test, but it is completely appropriate to answer students' questions during a case study, or to help students with their inquiry. Whereas instructors are now sometimes curiously reluctant to help students lest it "give them an unfair advantage" on an assessment, without the artificial barrier between assessment and learning, we are free to coach students to be more successful.

Second, we need to emphasize assessment as a means of promoting learning, rather than primarily as a means of ranking students for employers or further education. We must therefore reject the "talent hunt" model of assessment, in which the purpose of evaluation is to identify that tiny elite capable of becoming future scientists or scholars. The talent hunt model of assessment certifies ability that is already there, and ranks students, but does not take a very active role in helping students improve, except in the crudest "sink or swim" approach to motivation. In my view, our purpose is not simply to identify five out of five hundred who can make it, and to discard the rest as chaff, but rather to bring all five hundred up to their fullest possible potential—the higher the achievement of the lowest common denominator, the higher the overall achievement of the field.

In contrast to the talent hunt model, active evaluation means helping students become active learners—and that means an approach focused on helping students improve. Such improvement depends upon detailed and frequent (often continual) feedback, and a greater emphasis on formative rather than summative evaluation. Formative evaluation allows for greater risk-taking because it encourages an attitude where mistakes are seen as opportunities to be embraced, rather than as something shameful.

Equally important is an emphasis on helping students become self-monitoring. Students should not have to wait until they are told by an instructor to know how they are doing. If we truly want active learners, we need to move from an external locus of control to student self-assessment. This means helping students learn how to evaluate themselves, perhaps through peer or self-grading, but more fundamentally, by teaching them about the standards of the discipline or profession. This in turn implies that our assignments must have clear criteria and be designed to engage students in activities to develop the knowledge and skills they will require in their professional lives or specific discipline.

Principles for Achieving Active Evaluation

First, if we want active learners, we need to design assessments that create conditions for, and reward, active engagement. We therefore need clear objectives: if we want researchers, then we should introduce activities that get students researching, not answering multiple-choice questions about how to research. Students can generally hit any target they can see and that holds still for them. We need to identify through explicit rubrics or criteria exactly what we are looking for. This is a difficult, time-consuming activity that may evolve over several iterations of a course, but it is absolutely necessary if we are to create the conditions for students to take responsibility for their own achievement.

This clarity goes against the grain for many instructors. Colleagues often complain to me that if they told the students the criteria for assignments, they would all get 'A's. At one level, I have to question what would be wrong with students all mastering the course content; but the more fundamental problem here is that if instructors really believe that, then what they are actually saying is that they have to resort to trickery to cheat the majority of students out of their 'A' to maintain a talent hunt distribution. In contrast, in one of the courses in which I still use an essay examination, I print the exam question right in the course outline. This helps focus student learning, and I spend a lot less time dealing with off-topic or vacuous answers. Students still generally spread themselves over a normal curve, but with the difference that the bar can be raised considerably.

Thus, active evaluation implies and requires higher standards, because objectives are clear.

Second, active evaluation requires that assessments be authentic. In part, that means assignments that match real world tasks and provide transferability of learning beyond the context of the current course. Wherever possible, this should include a product that is itself useful to the student or others. The major problem with student plagiarism, for example, is that students have trouble seeing the relevance of assignments to their own lives or learning. "This guy wants a paper on Macbeth—I'll see if I can find him one on the Internet..." Term papers started out as authentic assessments when the purpose of university was the production of scholars and the research paper was a practice piece for a future career in academic publishing, but they now make little sense when universities are mass institutions involved in the production of forest workers, teachers, social workers, etc., none of whom will likely ever publish. Although term papers remain a useful assessment of literacy, style, logic, etc., these same abilities may be assessed in other assignments more suitable to the immediate needs and interests of our professional schools.

In part, authentic assessment means providing students with a real audience: classroom peers or the public. When the instructor is the only reader, students leave out documentation because they expect the instructor to "already know that"; they write about process rather than providing the final product ("I went to the library, but the book was out, so then I..."); they dismiss the need for correct spelling and grammar as the idiosyncratic hobbyhorse of "unreasonable" instructors, rather than as inherent in the writing task; and worst of all, they write what they believe the professor wants to hear rather than writing from the heart. Given a real audience (e.g., the production of a web site or poster), they are more likely to understand the need for correct grammar and full documentation, and less willing to cater to the instructor's views when they know they will be held publicly accountable for whatever they write.

In part, authentic assessment means allowing greater student ownership of the topic. Growing concerns about student plagiarism have driven many instructors to specify ever more narrowly defined topics in hopes that they may be too esoteric to show up on the Internet, but in doing so they are dictating ever more alienating assignments with little connection to either the real world or to students' own interests or needs. Instead, students should be encouraged to tackle questions of personal or professional interest, where the desire to answer the question is itself sufficient motivation for completing the assignment.

This in turn implies that active evaluation requires real questions. Tests can only test students on material for which there are clear right and wrong answers, or at least, right and wrong ways of supporting answers. In these types of assessment, we cannot ask questions for which we do not know the answers. But in active learning we often do exactly that. We allow and encourage students to pursue an inquiry for which no one yet knows the answer.

Finally, active evaluation implies sustained engagement with some assignment, rather than a series of fragmented tasks. Such assignments may be broken down into phases, stages, or steps to facilitate the pacing of work, and the frequency and timeliness of feedback, but active evaluation requires students to become deeply involved with a particular project over a significant period.

When students take personal ownership of a real question for a real audience over the course of a term, they are more likely to present us with their best work.