Surveys Are Easy

May 29, 2008

Schools put a lot of effort into surveys: alumni surveys, course surveys, faculty surveys.

An article in yesterday’s Chronicle summarizes work done at Cornell University to study the effectiveness of surveys of student engagement. Here’s the main take-away:

Their paper examines response rates of Cornell’s class of 2006 as the students progress through the university. In the fall of 2002, the authors say, 96 percent of first-time, full-time freshmen responded to the Cooperative Institutional Research Program Freshman Survey, a paper-and-pencil questionnaire administered by the Higher Education Research Institute at the University of California at Los Angeles.

But in similar surveys, given online in the students’ freshman, sophomore, and junior years, the response rates were 50, 41, and 30 percent, respectively. A final survey of graduating seniors collected data from 38 percent of them.

Those who completed the follow-up surveys were predominantly women, the Cornell researchers say, and they had higher grade-point averages than those who did not respond.

Surveys are easy: a relatively small number of people (often just one) can administer online surveys to thousands of students, then collect the data. Other forms of assessment are much more time-consuming and require culture change, the mustering of resources, etc. So while the community of survey specialists worries about ‘survey fatigue,’ whether students are completing surveys after 9pm (when they could be ‘partying’), and other questions familiar to most marketing executives, our institutions are increasingly dependent on this single data source for major decision making.

The statisticians argue that something is better than nothing, and that they can control for all kinds of oddities. According to the article, 30% is an acceptable response rate. What is stunning to us about the report is that it is considered news. Every institution has this kind of data: the ability to bounce student attributes against survey data (and other data). But, in general, student evaluations are taken in a very literal way (as anecdotal evidence, without context) and are used to make major decisions about tenure and curriculum redesign.
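One standard way statisticians “control for oddities” like Cornell’s skewed respondent pool (more women, higher GPAs) is post-stratification weighting. Here is a minimal sketch in plain Python; all of the numbers and group labels are invented for illustration, not drawn from the Cornell study:

```python
# Hypothetical post-stratification weighting: adjust survey results when
# respondents don't mirror the population. All figures are invented.

# Known population shares (e.g., from the registrar)
population = {"female": 0.52, "male": 0.48}

# Shares actually observed among survey respondents
respondents = {"female": 0.65, "male": 0.35}

# Each respondent group is weighted by (population share / respondent share),
# so over-represented groups count less and under-represented groups more.
weights = {g: population[g] / respondents[g] for g in population}

# Weighted mean of some survey item (say, self-reported engagement on 1-5)
item_means = {"female": 3.8, "male": 3.2}
weighted_mean = sum(weights[g] * respondents[g] * item_means[g]
                    for g in population)

print(round(weighted_mean, 3))  # lower than the raw mean of 3.59
```

Note the weighted estimate (3.512) differs from the naive respondent average (3.59), which is exactly the bias the researchers are flagging; weighting helps, but it cannot recover whatever makes non-respondents different in ways the strata don’t capture.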

Do results like Cornell’s invalidate the process? Not at all – but they should prompt changes to the survey process (the goal of outcomes assessment: to improve processes through rigorous analysis). Similarly, the vast amount of data available in a school’s back-end database (Banner, Datatel) should be put to much greater use. How much variability in grade inflation occurs in particular courses? Which courses/programs receive the best course evaluations? This data leads to questions that would help improve outcomes while addressing some of the incoherence students experience. Tying these various data streams together would help build a complete picture. So what if a teacher gets slammed for being too ‘hard’ on course evaluations? Perhaps they are grading significantly harder than their colleagues (which doesn’t necessarily mean they should be the ones to change!). This sort of inquiry is second nature to most academics; we just don’t apply our analytical and research skills to our most important undertaking: teaching.
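The grading-severity question above can be sketched with a few lines of plain Python: compare each instructor’s average grade against the course-wide average before reading a “too hard” evaluation at face value. The records and field names here are hypothetical, not pulled from any real SIS like Banner or Datatel:

```python
# Sketch: per-instructor grading severity relative to the course average.
# All records are invented for illustration.
from statistics import mean
from collections import defaultdict

# (course, instructor, GPA-scale grade) records
records = [
    ("ENG101", "A", 3.6), ("ENG101", "A", 3.4),
    ("ENG101", "B", 2.9), ("ENG101", "B", 2.7),
    ("PHY201", "A", 3.1), ("PHY201", "C", 3.2),
]

by_course = defaultdict(list)
by_pair = defaultdict(list)
for course, instructor, grade in records:
    by_course[course].append(grade)
    by_pair[(course, instructor)].append(grade)

# Positive = grades easier than the course average; negative = harder
deviation = {pair: round(mean(g) - mean(by_course[pair[0]]), 2)
             for pair, g in by_pair.items()}
print(deviation)
```

In this toy data, instructor B in ENG101 grades 0.35 points below the course average – context a curriculum committee should have before concluding the evaluations mean B should change.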


Blackboard Version 8, Peer Review, and Outcomes Assessment

May 20, 2008

We were pleasantly surprised to see Waypoint (web-based software for creating and using interactive rubrics…find out more here) featured in Bill Vilburg’s LMSPodcast series.

Bill is the Director of Instructional Advancement at the University of Miami, and does in-depth interviews on issues concerning Learning Management Systems. He has ambitiously set out to interview all of the presenters at this year’s Blackboard World Conference in Las Vegas.

Last week he interviewed Dr. Rosemary Skeele of Seton Hall University and Dan Driscoll of Drexel.

All the interviews that Bill does are in-depth and wonderfully paced. The most exciting aspect of the interviews is how little time is spent talking about Waypoint. The interviews are all about the challenges of designing effective peer reviews, leveraging Blackboard and Blackboard Vista, and developing data that is used to improve curricula. Waypoint is just the mechanism.

Peer review, in particular, is an under-utilized tool in education. When done right (just listen to Dan Driscoll’s process) it is a fantastic way for teachers to coach more, grade less, and radically alter students’ relationship with writing. With the release of Blackboard Version 8, there is a window of attention on the subject because v.8 has a rudimentary Likert Scale commenting tool built into it. Since Waypoint was designed from day one with peer review in mind – peer review of any artifact or product – and is based on sound composition and pedagogical theory, we look forward to an increased dialogue on the subject.

You can find the podcasts here:

LMS 43 Dan Driscoll, Drexel University

Dan Driscoll uses the Waypoint add-on to run peer review in his first-year composition courses at Drexel. He discusses how he sets up the rubrics and then has the students fill them out. The process of applying the rubric to the papers gives students as much or more value than the feedback given back to the original author. Dan will be presenting “Course-Embedded Assessment and the Peer Review Process” at BbWorld’08, July 15-17.

>> Play the Podcast

LMS 42 Rosemary Skeele, Seton Hall

Rosemary Skeele describes how Seton Hall is using the Waypoint add-on for Blackboard to help assess learning, primarily for accreditation purposes. Waypoint allows you to integrate rubrics into Blackboard and in the process opens new possibilities. Rosemary will be presenting “Blackboard and Waypoint: Perfect Together” at BbWorld’08, July 15-17.

>> Play the Podcast

Best Practices in Course-Embedded Assessment

April 27, 2008

We’re just finishing up three days at NC State’s Assessment Symposium. 500 educators from around the USA have come together to talk about student learning, “closing the loop,” and accreditation.

Many of the sessions are focused not just on data-gathering, but on teaching and learning. A number of attendees have talked about the change they’ve seen even since last year: a focus on bringing assessment into the process of teaching (!). That is, avoiding the mad dash to develop data just for accreditation that often results in two databases of student learning outcomes. One presenter said that on her campus administrators referred to the “shadow database,” which reminded me of a business owner keeping two sets of books – one for the IRS and one for the real world.

We gave a 60 minute presentation on best practices in course-embedded assessment. We must have had at least 50 people in attendance…not to learn about Waypoint as much as to gain insight into how schools execute.

I spoke in three general areas:

  1. Getting faculty help with the challenges of formal assessment
  2. “Closing the loop” – using data to inform changes in curricula
  3. Using a sampling approach to gather data quickly and efficiently for benchmarking purposes
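The third item – sampling for benchmarking – can be sketched in a few lines: rather than scoring every artifact in a program, score a random sample and report the estimate with a margin of error. The scores and sample sizes below are invented for illustration:

```python
# Sketch of sampling for benchmarking: estimate a program-wide rubric
# score from a random sample of artifacts. All numbers are invented.
import random
from statistics import mean, stdev

random.seed(42)  # fixed seed so the illustration is reproducible

# Pretend rubric scores (1-4) for all 800 student artifacts in a program
all_scores = [random.choice([1, 2, 3, 4]) for _ in range(800)]

# Score only a random sample of 80 for the benchmark
sample = random.sample(all_scores, 80)
m, s = mean(sample), stdev(sample)
margin = 1.96 * s / (len(sample) ** 0.5)  # approximate 95% confidence

print(f"estimated mean rubric score: {m:.2f} +/- {margin:.2f}")
```

The point for assessment committees is that a well-drawn sample of 80 papers, scored carefully, tells you more than 800 papers scored in a rush – and the margin of error makes the trade-off explicit.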

Getting faculty help with the challenges of formal assessment:

We increasingly talk to senior administrators about the need to look at authentic assessment and course-embedded assessment as more than a challenge in software training. This work is not about clicking the right buttons in Blackboard or Waypoint. Read the rest of this entry »

Undergraduates “fighting for feedback”

February 6, 2008

The University of Pennsylvania’s student newspaper, The Daily Pennsylvanian, recently included a student-written piece on the lack of feedback in higher education. The article is called “Fighting for Feedback,” and it is a showstopper.

If we want to talk about accountability and outcomes assessment, what greater measure is there than students receiving feedback from their professors? David Kanter’s article begins,

The first time I got a paper back from a professor here at Penn, I was a little confused.

Other than a few perfunctory, illegible comments found scribbled in the margins, insightful, constructive criticism was nowhere in sight. I thought (incorrectly, I suppose) that I would receive extensive feedback on each assignment. I soon learned that unmarked papers and vague comments were the norm.

This is a message we’ve heard over and over again. It isn’t a Penn thing. It isn’t an Ivy League thing, or a private school thing. It’s just the old paradigm of pushing information at students rather than helping them discover knowledge for themselves. Some of us are more talented pushers than others, and can make that mode of education work through charisma and talent. But the majority of experiences we’ve all had in our educational careers are closer to what David describes than a true dialogue over issues and ideas. Read the rest of this entry »

ePortfolios hijacked…and the teacher as test pilot…

December 27, 2007

A couple of terrific articles recently that have serious implications (and lots to teach us) as educators.

The first, from Campus Technology, argues that higher education has co-opted the ePortfolio from its intended role as a reflective and creative student project to become a tool for accreditation reporting. Since our focus with Waypoint has always been on the assessment engine, and not the attempt to build yet another portfolio solution, we are in total agreement.

The second article, from The New Yorker magazine, makes a devastating case against the medical establishment (you’d think we would have run out of reasons to bash medicine) and its hubris. Atul Gawande makes a compelling comparison between contemporary medical doctors and the test pilots of The Right Stuff fame. As a teacher, the comparison hit home. Could simple checklists help our students complete the tasks we assign more creatively and more competently?

Here are the citations:

  1. Trent Batson, “The ePortfolio Hijacked,” Campus Technology, 12/12/2007
  2. Atul Gawande, “Annals of Medicine: The Checklist,” The New Yorker, 12/10/2007

Dr. Helen Barrett, who has been writing about ePortfolios for years in a variety of media, summarized the issue with the use of ePortfolios eloquently in her blog: Read the rest of this entry »

Plug and Chug and Crank…

November 23, 2007

From middle schools through world-class MBA programs, the more we talk and listen, the clearer it becomes that the plug and chug and crank approach to learning doesn’t work.

That line is a famous quote from a professor I had years ago. He still teaches physics, and is supremely dedicated to his job and his students. He was a great entertainer; I remember enjoying his lectures for their good-natured teasing, funny stories, and huge energy. But he was teaching engineers basic physics and the math that goes with it. Read the rest of this entry »

Outcomes: LVAIC 2007

August 1, 2007

We spoke at the Lehigh Valley Technology in the Classroom Symposium today.

Higher education institutions from the Lehigh Valley (in and around Allentown, PA – about an hour north of Philadelphia) gathered to share approaches to common challenges, immerse themselves in some of the latest and greatest things going on in ed tech, and relax their way through a very hot PA summer day.

It was a calm and pleasant way to spend some time getting to know some new schools, talk to the Tablet PC whizzes from Gateway, and talk about everybody’s favorite subject, outcomes.

We got the coveted just-before-lunch time slot, just after a very compelling presentation in which Beth Ritter-Guth described her amazing uses of Second Life to immerse students in the literary worlds of Beowulf, Edgar Allan Poe, and Dante. Tough act to follow. Now if some foundation would just throw a couple of million bucks into making a totally immersive, photo-realistic Yoknapatawpha County.

Read the rest of this entry »