He charges upstairs to the bedroom of the victim’s son and finds there a pair of shoes that match the footprint. He arrests the young man and ends the investigation. None of the neighbours are questioned. No witnesses are identified. No autopsy is conducted. No other evidence is collected whatsoever.
The suspect denies the allegation. He urges Lieutenant Sole to speak to some friends with whom he claims to have been playing Chutes and Ladders that evening. This potential alibi is never followed up.
The district attorney’s office is initially concerned that the case, based solely on the footprint evidence, might be a bit weak, but they are reassured when Judge Treadwell is assigned to hear the case.
At trial, the defense offers surveillance camera footage that shows someone else entering and leaving the victim’s home prior to the murder, as well as several threatening texts from an unknown number. In rendering her guilty verdict, however, Judge Treadwell notes that since the prosecutor had already convinced her with the footprint, she really had not seen any need to consider any other evidence.
I suspect that most of us would be pretty shocked if we were to come across a story like that on the evening news. What investigator would deliberately choose to rely on such limited evidence in preparing a case, and what judge in deciding it? It seems a course perfectly calculated to arrive at wholly inaccurate conclusions.
Yet the approach to judging student achievement in school has, in the past, tended to follow the example of Lieutenant Sole and Judge Treadwell fairly closely. Teachers would rely on a relatively small number of tests or essays, or in some cases a single final exam, to assign a grade to months of learning, ignoring all manner of other good (and in many cases better) evidence.
If we want our students to be as successful as possible, then we need to come to conclusions about their learning and achievement that are as accurate as possible. Assessment and evaluation, the processes for coming to those conclusions about student learning, are not at all dissimilar to what is done by the detective and the judge.
Since we cannot actually see into the brains of our students to know what is going on there, we need them to do things to show us what and how they are learning. Tests, essays, and projects—what we refer to as products—are certainly one important way that this can be done, but they alone cannot provide sufficient evidence.
Consequently, we are moving to make greater and greater use of two other important sources of information about student learning: observations of students as they work and conversations with them about their learning. This is referred to as triangulation of evidence and, like triangulation in many other fields of endeavour, offers us a much more accurate picture of where students are in their learning than does reliance on only one type of data.
The inclusion of assessment data from observations and conversations is not meant to replace or alter the results of a test or other product. Nor is it simply a way, as is sometimes cynically suggested, for those who have done poorly on an assignment to improve their marks after the fact. On the contrary, since conversations and, especially, observations tend to do a better job of illuminating student mastery of process, these kinds of assessment data are often gathered before the product is completed. By drawing upon a wider range of different types of evidence, we can avoid the blind spots that are created by relying on only a single source and improve the validity and reliability of assessment.