The Ultimate ‘Outcome’

January 17, 2010

We often use the example of assessing driving skills in faculty development workshops.

We like the example because most of us feel that we know a good driver when we see one (and that we’re good drivers!), yet driving turns out to be a surprisingly difficult skill to assess. Try developing a rubric for driving skills…

There are also lots of regional peculiarities that help personalize the conversation. Examples:

  • Jug handles in New Jersey
  • Ice/snow driving in Maine
  • Requirement in Connecticut to verify that you are not a child molester when purchasing/operating a van (!)

In one such workshop, a teacher offered a terrific assessment insight: she argued that observing how a driver acted at a suburban stop sign would tell you a lot about their driving skill and attitude. Do they:

  • Screech to a halt?
  • Roll through the stop sign?
  • Ignore the “if two cars arrive at the same time to a 4-way stop, the car to your right goes first” rule?
  • Let the car’s momentum end, then accelerate away after looking both ways?

This ‘outcome’ trumps a lot of minute detail that ‘experts’ try to build into such assessments (remember, everyone thinks they know how to drive, so their rubrics are immediately complex).

Last night I saw the documentary film In A Dream, which contains an even higher-level outcome: if you were trying to measure parenting skill, imagine what would become evident if an adult child of the parent created a documentary. And captured on film the father abandoning the mother for another woman. After 40 years of marriage.

It’s a great documentary, available on DVD. And what could be more subjective than parenting over a lifetime? The results require thinking and analysis on the assessor’s part – obviously any filmmaker will have an agenda, a viewpoint, and will edit footage for a certain effect. But the film feels like an authentic, balanced portrait and is intensely moving.

Assessing a presentation, or a collage, or a term paper should seem fairly simple in comparison. So rather than worry about multiple criteria measured nine different ways, think about what leading indicators are appropriate for your students and the task at hand.


Outcomes Assessment and Grades

February 16, 2009

A couple of recent articles in Inside Higher Ed caught our eye – one on grades and grade inflation, and the other on the creation of the National Institute for Learning Outcomes Assessment.

It seems obvious to us that grading and assessment are largely the same thing. Barring sampling programs, or initiatives designed to assess program outcomes (aggregating student results rather than considering the success of individuals), grading IS assessment.

It’s just that the typical grade (A-, B+ etc.) is an extraordinarily blunt instrument.

Imagine reading a car review (okay, bad example – who is reading car reviews anymore?) or a film review that is simply a letter grade. Many reviews feature letter grades, but they come after a thousand words of measured criticism. And it is subjective criticism, but we largely accept the skill of a Roger Ebert and take his points seriously. He is assessing the film, and he does it through a narrative response built upon well-established criteria.

Education is even messier than film reviewing, because the letter grades awarded are all over the place. To draw the analogy out a little further, imagine trying to pick a movie to see from the following:

  • 20 films, all rated B+ or higher (with no narrative or other information)
  • 20 films, each rated four times by separate reviewers, where the individual grades are all over the map but the averages are still B+ or higher

You wouldn’t know which film to see…and likewise our system of letter grades is useless for assessing knowledge.
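The problem with bare letter grades can be made concrete with a quick sketch (a hypothetical illustration – the grade-to-point mapping and the sample “reviews” are our own assumptions, not from either article): two films can carry roughly the same average grade while the underlying ratings tell completely different stories.

```python
import statistics

# A conventional 4.0-scale mapping (assumed; institutions vary).
POINTS = {"A": 4.0, "A-": 3.7, "B+": 3.3, "B": 3.0, "C": 2.0, "D": 1.0}

def summarize(grades):
    """Return (mean, population std dev) of letter grades on a 4.0 scale."""
    values = [POINTS[g] for g in grades]
    return statistics.mean(values), statistics.pstdev(values)

# Two hypothetical films, each reviewed four times.
consistent = ["B+", "B+", "B+", "B+"]   # every reviewer agrees
mixed      = ["A", "A", "B+", "C"]      # all over the map, similar average

print(summarize(consistent))   # identical grades, zero spread
print(summarize(mixed))        # similar mean, large spread
```

Both averages land at B+ or just above, but only the spread reveals that the second film divided its reviewers – exactly the information a single averaged grade throws away.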


Effective Peer Review: Leveraging the Learning Management System

November 30, 2008

Introduction

Peer review is a widely accepted practice, particularly in writing classes, from high school through college and graduate school. The goal of peer review is typically two-fold:

  1. To help students get valuable feedback at the draft stage of their work.
  2. To help students more deeply understand the goals of the assignment.

Unfortunately, peer review is often used as a busy-work activity, or a process that takes advantage of conscientious students while allowing others to do superficial work. For instance, many teachers will hand out a list of peer review questions in class, and then give students 30 minutes to review two papers written by their colleagues. An open-ended question might be:

  • “Did the writer adequately summarize and discuss the topic? Explain.”

Many students will write “Yes” under this question and move on. Without review by the instructor (difficult to do when many instructors have 50 to 150 students), these students can destroy the social contract of a peer review. Other students will spend a lot of time making line edits to the draft – correcting grammar, making minor changes to sentences etc. At the draft stage this is probably inappropriate – the focus should be on ideas and big-picture organization, not embroidery. Plus, some students aren’t qualified to be dictating where the semicolon should go.

Students aren’t alone in having these problems; in 1982, Nancy Sommers published her highly influential piece, “Responding to Student Writing,” in which she observed how little teachers understand the value of their commenting practices – essentially, that they don’t know what their comments do. She raised numerous long-standing points in her evaluation of teachers’ first- and second-draft comments on papers.

Two of her major findings:

  1. Teachers provide paradoxical comments that lead students to focus more on “what teachers commanded them to do than on what they are trying to say” (151).
  2. She found “most teachers’ comments are not text-specific and could be interchanged, rubber-stamped, from text to text” (152). One result is that revising, for students, becomes a “guessing game” (153). Sommers concluded by saying, “The challenge we face as teachers is to develop comments which will provide an inherent reason for students to revise” (156).

Assessing Critical Thinking

July 10, 2008

Many of our users know about the Waypoint Public Library – a shared library of both Assignments and Elements created by our clients. Each month we’ll highlight a unique approach to assessment and feedback and make it easy for you to copy and utilize it.

As a first installment we thought we’d start with a double-shot of critical thinking, a crucial skill difficult to assess and of interest to educators from middle school through graduate school. These Assignments don’t formally address “critical thinking” as a skill, but seek to differentiate summarizing facts from making original connections while synthesizing information.

There are two versions of this Waypoint Assignment: one intended for peer review (pdf), and the other for an instructor (pdf) to use. Specific references (to writing handbooks etc.) have been removed. You can easily copy these Assignments from the Public Library and edit them to suit. Both Assignments are linked below.

They both make use of Checklists, but you’ll notice that the first few Observations in the instructor versions have traditional ‘rubric’ choices. So the detailed Observations could be easily dropped and the Checklist Element converted to a Performance Element.

>> Read more about copying an Assignment from the Public Library
>> See the detailed version of the instructor Assignment
>> See the detailed version of the peer review Assignment


Sophisticated Rubrics and the Power of Feedback

June 23, 2008

We recently presented on a simple change to collecting work from students: ask them to include a cover letter, addressed to the instructor, with their submission of work. This cover letter should reflect upon the previous feedback they have received (from instructors and, most recently, their peer reviewers if applicable). It could also give the reader an overview of the goals of their work and specify areas in which they (the student) are most interested in receiving feedback.

We thought it useful to illustrate this process with examples.

The following example is from a college-level writing class where students were studying the effect of war on culture (and vice-versa). They were given the following assignment:

We will read some articles about Iraq, its effects, a history of horror movies, and a detailed account of the US involvement in Somalia.

Project 1 is an academic paper: formal diction, MLA citation formatting, credible research – the works. In four or five pages (1,000 to 1,250 words), you will make an original argument concerning the impact of art on war, or conversely war on art. By ‘art’ we mean the visual arts, music, film, novels – almost any creative undertaking. You can be quite liberal in your selection…just be prepared to defend the choice.

The key here is originality. Did Woodstock influence the Vietnam War? It’s probably easy to argue that it did (and also that it didn’t, since we were in Vietnam for another 6 years). Did the poetry of Wilfred Owen horrify the English so much that they avoided a second war with the Germans? No. And neither of these approaches would make a worthy paper.

Your four to five page paper must have at least three credible sources (not including the assigned work for the class).

It is worth noting that the students read several lengthy articles reviewing pop-culture icons (like the Saw series of films) that argued deep connections to more serious issues than the students might at first see. So they were set up to engage intellectually with material of their own choosing, and connect it to a war. There were several Harry Potter essays, but the results were satisfyingly diverse.

Here is an example cover letter from a student:

Example Student Cover Letter


Here is the rubric used to assess and respond to the student, as it appears in Waypoint (interactive rubric software – the rubric could be translated to a paper-based approach):

Argumentative Essay Rubric


This student received the following feedback, along with an annotated document (created in Microsoft Word, then appended to the feedback in Waypoint), from the instructor:

Sample Feedback


Needless to say, this kind of feedback is unusual in any educational setting – but the above was created in about 8 minutes, and the cover letter process (along with other tips and tricks) helps make sure the process is constructive and useful.

It is worth noting that this assignment was given in the middle of the academic term, so the students could be expected to learn from the feedback, then apply it in a final project that did not receive this kind of detailed feedback.


Blackboard Version 8, Peer Review, and Outcomes Assessment

May 20, 2008

We were pleasantly surprised to see Waypoint (web-based software for creating and using interactive rubrics…find out more here) featured in Bill Vilburg’s LMSPodcast series.

Bill is the Director of Instructional Advancement at the University of Miami, and does in-depth interviews on issues concerning Learning Management Systems. He has ambitiously set out to interview all of the presenters at this year’s Blackboard World Conference in Las Vegas.

Last week he interviewed Dr. Rosemary Skeele, from Seton Hall University and Dan Driscoll, from Drexel.

All the interviews that Bill does are in-depth and wonderfully paced. The most exciting aspect of the interviews is how little time is spent talking about Waypoint. They are all about the challenges of designing effective peer reviews, leveraging Blackboard and Blackboard Vista, and developing data that is used to improve curricula. Waypoint is just the mechanism.

Peer review, in particular, is an under-utilized tool in education. When done right (just listen to Dan Driscoll’s process) it is a fantastic way for teachers to coach more, grade less, and radically alter students’ relationship with writing. With the release of Blackboard Version 8, there is a window of attention on the subject because v.8 has a rudimentary Likert Scale commenting tool built into it. Since Waypoint was designed from day one with peer review in mind – peer review of any artifact or product – and is based on sound composition and pedagogical theory, we look forward to an increased dialogue on the subject.

You can find the podcasts here:

LMS 43 Dan Driscoll, Drexel University

Dan Driscoll uses the Waypoint add-on system to create a peer review system in his first-year composition courses at Drexel. He discusses how he sets up the rubrics and then has the students fill them out. The process of applying the rubric to the papers gives students as much or more value than the feedback given back to the original author. Dan will be presenting “Course-Embedded Assessment and the Peer Review Process” at BbWorld’08, July 15-17.

>> Play the Podcast

LMS 42 Rosemary Skeele, Seton Hall

Rosemary Skeele describes how Seton Hall is using the Waypoint add-on for Blackboard to help assess learning, primarily for accreditation purposes. Waypoint allows you to integrate rubrics into Blackboard and in the process opens new possibilities. Rosemary will be presenting “Blackboard and Waypoint: Perfect Together” at BbWorld’08, July 15-17.

>> Play the Podcast


Course-Embedded Assessment, Part Two – Closing the Loop

May 7, 2008

To continue a summary of our presentation at the NC State Assessment Symposium…

Our ‘closing the loop’ example was the most detailed of the best practices that we presented because I was personally involved in the project.

Closing the loop refers to not just collecting data, but using it to inform decision-making. It is, seemingly, the focus of most accrediting agencies. In fact, one engineering school I visited recently was criticized by ABET for collecting too much data and not doing very much with it. Whatever the stereotype of engineers might suggest, that isn’t just an engineering issue – it’s a reality of outcomes assessment: we can collect all kinds of data, but using it to change what we do is often the most challenging part.

I was impressed, at several different sessions at NC State, to see educators excited at the prospect of even slender amounts of data. As Dr. Ken Bain and his colleagues at Montclair State University’s Research Academy argue, repositioning the whole accreditation/outcomes/teaching debate as a question of academic inquiry rather than external requirements can be quite powerful. Educators are researchers – no matter whether teaching economics, English, or third grade. So it makes sense that presented with data, educators begin to see their own teaching as an area worthy of research.

We presented the changes brought to a first-year engineering program, particularly having to do with research skills.

The challenge is one familiar to most educators: how do we teach students to value the library databases and scholarly resources available to them, and understand the differences between Wikipedia, Google searches, and corporate websites?

The consensus amongst a team of first-year humanities instructors, who taught an interdisciplinary first year covering composition, design and research, and literature, was that research skills could be taught more effectively. The existing process saw 30 sections of first-year engineers marched to the library in January for a 60-minute presentation by librarians on the library databases.

This approach was a classic catch-22: since the students were just beginning their research project, they didn’t have any vested interest in learning the ins and outs of ProQuest and LexisNexis. But when they did need the information, later in the term as they finished up lengthy design proposals, it was too late to teach them. So these sessions were often dry and difficult for students and teachers alike, even when the librarians tried to engage the students with examples of design projects past.