Surveys Are Easy

May 29, 2008

Schools put a lot of effort into surveys: alumni surveys, course surveys, faculty surveys.

An article in yesterday’s Chronicle summarizes work done at Cornell University to study the effectiveness of surveys of student engagement. Here’s the main take-away:

Their paper examines response rates of Cornell’s class of 2006 as the students progress through the university. In the fall of 2002, the authors say, 96 percent of first-time, full-time freshmen responded to the Cooperative Institutional Research Program Freshman Survey, a paper-and-pencil questionnaire administered by the Higher Education Research Institute at the University of California at Los Angeles.

But in similar surveys, given online in the students’ freshman, sophomore, and junior years, the response rates were 50, 41, and 30 percent, respectively. A final survey of graduating seniors collected data from 38 percent of them.

Those who completed the follow-up surveys were predominantly women, the Cornell researchers say, and they had higher grade-point averages than those who did not respond.

Surveys are easy: a relatively small number of people (often one) can administer online surveys to thousands of students, then collect the data. Other forms of assessment are much more time consuming and require culture change, the mustering of resources, etc. So while the community of survey specialists worries about ‘survey fatigue,’ whether students are completing surveys after 9pm (when they could be ‘partying’), and other questions familiar to most marketing executives, our institutions are increasingly dependent on this single data source for major decision making.

The statisticians argue that something is better than nothing, and that they can control for all kinds of oddities; according to the article, 30% is an acceptable response rate. What is stunning to us about the report is that it is considered news. Every institution has this kind of data, and the ability to bounce student attributes against survey responses (and other data). Yet, in general, student evaluations are read very literally (as anecdotal evidence, without context) and are used to make major decisions about tenure and curriculum redesign.
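What “controlling for oddities” looks like in practice is not mysterious. Here is a minimal sketch, in Python with pandas, of a nonresponse-bias check and a simple post-stratification weighting; the data, column names, and strata are hypothetical stand-ins, not the Cornell authors’ actual method.

```python
import pandas as pd

# Hypothetical institutional data: one row per student, with a flag for
# whether the student answered the survey. Column names are illustrative.
students = pd.DataFrame({
    "student_id": range(1, 11),
    "gender":    ["F", "F", "F", "M", "M", "F", "M", "M", "F", "M"],
    "gpa":       [3.8, 3.5, 3.9, 2.9, 3.1, 3.6, 2.7, 3.0, 3.4, 3.2],
    "responded": [1, 1, 1, 0, 1, 1, 0, 0, 1, 0],
})

# 1. Check for nonresponse bias: do respondents look like the full cohort?
print(students.groupby("responded")[["gpa"]].mean())
print(students.groupby("gender")["responded"].mean())  # response rate by gender

# 2. Simple post-stratification: weight each respondent by the inverse of
#    the response rate in their stratum, so under-represented groups count more.
rates = students.groupby("gender")["responded"].transform("mean")
students.loc[students["responded"] == 1, "weight"] = 1.0 / rates
```

Weighting by inverse response rate within a stratum is the crudest correction; a real analysis would use more strata and model-based response propensities, but the point stands: the raw response rate is only the beginning of the story.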

Do results like Cornell’s invalidate the process? Not at all, but they should prompt changes to the survey process; that is the goal of outcomes assessment: to improve processes through rigorous analysis. Similarly, the vast amount of data available in a school’s back-end database (Banner, Datatel) should be put to much greater use. How much variability in grade inflation occurs in particular courses? Which courses and programs receive the best course evaluations? This data leads to questions that would help improve outcomes while addressing some of the incoherence students experience, and tying these various data streams together would help build a complete picture. So what if a teacher gets slammed for being too ‘hard’ on course evaluations? Perhaps they are grading significantly harder than their colleagues (which doesn’t necessarily mean they should be the ones to change!). This sort of inquiry is second nature to most academics; we just don’t apply our analytical and research skills to our most important undertaking: teaching.
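To make the “grading harder than colleagues” question concrete, here is a minimal sketch, again in Python with pandas, that flags sections whose mean grade sits far from the course-wide mean. The table and column names are hypothetical stand-ins for what one might pull from a Banner or Datatel back end.

```python
import pandas as pd

# Hypothetical section-level grade data pulled from the SIS back end.
# Course codes, instructors, and grades are illustrative.
sections = pd.DataFrame({
    "course":     ["ENGL101"] * 4 + ["MATH140"] * 4,
    "instructor": ["A", "B", "C", "D", "E", "F", "G", "H"],
    "mean_grade": [3.4, 3.1, 2.6, 3.3, 2.9, 3.0, 2.4, 3.1],
})

# How far does each section's mean grade sit from the course-wide mean,
# in standard deviations? Large negative z-scores flag instructors who
# grade markedly harder than colleagues teaching the same course.
grp = sections.groupby("course")["mean_grade"]
sections["z"] = (sections["mean_grade"] - grp.transform("mean")) / grp.transform("std")
print(sections.sort_values("z"))
```

Join a table like this against course-evaluation scores and the anecdote (“this teacher is too hard”) becomes a researchable question: is the low evaluation tracking grading severity, and is the severity an outlier or the norm?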


Blackboard Version 8, Peer Review, and Outcomes Assessment

May 20, 2008

We were pleasantly surprised to see Waypoint (web-based software for creating and using interactive rubrics…find out more here) featured in Bill Vilburg’s LMSPodcast series.

Bill is the Director of Instructional Advancement at the University of Miami, and does in-depth interviews on issues concerning Learning Management Systems. He has ambitiously set out to interview all of the presenters at this year’s Blackboard World Conference in Las Vegas.

Last week he interviewed Dr. Rosemary Skeele of Seton Hall University and Dan Driscoll of Drexel University.

All the interviews that Bill does are in-depth and wonderfully paced. The most exciting aspect of the interviews is how little time is spent talking about Waypoint. The interviews are all about the challenges of designing effective peer reviews, leveraging Blackboard and Blackboard Vista, and developing data that is used to improve curricula. Waypoint is just the mechanism.

Peer review, in particular, is an under-utilized tool in education. When done right (just listen to Dan Driscoll’s process) it is a fantastic way for teachers to coach more, grade less, and radically alter students’ relationship with writing. With the release of Blackboard Version 8, there is a window of attention on the subject, because v.8 has a rudimentary Likert-scale commenting tool built into it. Since Waypoint was designed from day one with peer review in mind (peer review of any artifact or product) and is based on sound composition and pedagogical theory, we look forward to an increased dialogue on the subject.
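For readers wondering what a rubric buys you over a bare Likert comment, here is a minimal sketch of a rubric as a data structure, with a function that records one peer review. This is emphatically not Waypoint’s actual data model, just an illustration of the idea in Python.

```python
# A minimal sketch of a rubric as a data structure. The criteria and
# level labels are hypothetical, not Waypoint's design.
RUBRIC = {
    "thesis":   ["missing", "vague", "arguable", "compelling"],
    "evidence": ["none", "thin", "adequate", "well-integrated"],
    "style":    ["distracting", "serviceable", "polished"],
}

def review(scores: dict, comments: dict) -> dict:
    """Record one peer review: a level per criterion plus an optional comment."""
    return {
        crit: {"level": levels[scores[crit]], "comment": comments.get(crit, "")}
        for crit, levels in RUBRIC.items()
    }

feedback = review(
    scores={"thesis": 2, "evidence": 1, "style": 1},
    comments={"evidence": "Cite the library databases, not just Google hits."},
)
print(feedback["thesis"])  # {'level': 'arguable', 'comment': ''}
```

Even this toy version shows the difference: the reviewer must commit to a named level on every criterion, and each comment is attached to the criterion it concerns, which is exactly the structure that makes the resulting data usable for curriculum improvement.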

You can find the podcasts here:

LMS 43 Dan Driscoll, Drexel University

Dan Driscoll uses the Waypoint add-on system to create a peer review system in his first-year composition courses at Drexel. He discusses how he sets up the rubrics and then has the students fill them out. The process of applying the rubric to a peer’s paper gives the reviewing student as much or more value than the feedback gives the original author. Dan will be presenting “Course-Embedded Assessment and the Peer Review Process” at BbWorld’08, July 15-17.

>> Play the Podcast

LMS 42 Rosemary Skeele, Seton Hall

Rosemary Skeele describes how Seton Hall is using the Waypoint add-on for Blackboard to help assess learning, primarily for accreditation purposes. Waypoint allows you to integrate rubrics into Blackboard and, in the process, opens new possibilities. Rosemary will be presenting “Blackboard and Waypoint: Perfect Together” at BbWorld’08, July 15-17.

>> Play the Podcast


Course-Embedded Assessment, Part Two – Closing the Loop

May 7, 2008

To continue a summary of our presentation at the NC State Assessment Symposium…

Our ‘closing the loop’ example was the most detailed of the best practices that we presented because I was personally involved in the project.

Closing the loop refers to not just collecting data but using it to inform decision-making. It is, seemingly, the focus of most accrediting agencies. In fact, one engineering school I visited recently was criticized by ABET for collecting too much data and not doing very much with it. I don’t think that’s just an engineering issue, whatever the stereotype of engineers might suggest, but a reality of outcomes assessment: we can collect all kinds of data, but using it to change what we do is often the most challenging part of the process.

I was impressed, at several different sessions at NC State, to see educators excited at the prospect of even slender amounts of data. As Dr. Ken Bain and his colleagues at Montclair State University’s Research Academy argue, repositioning the whole accreditation/outcomes/teaching debate as a question of academic inquiry rather than external requirements can be quite powerful. Educators are researchers, whether they teach economics, English, or third grade. So it makes sense that, presented with data, educators begin to see their own teaching as an area worthy of research.

We presented the changes brought to a first-year engineering program, particularly having to do with research skills.

The challenge is one familiar to most educators: how do we teach students to value the library databases and scholarly resources available to them, and understand the differences between Wikipedia, Google searches, and corporate websites?

The consensus amongst a team of first-year humanities instructors, who taught an interdisciplinary first year covering composition, design and research, and literature, was that research skills could be taught more effectively. The existing process saw 30 sections of first-year engineers marched to the library in January for a 60-minute presentation by librarians on the library databases.

This approach was a classic catch-22: since the students were just beginning their research project, they didn’t have any vested interest in learning the ins and outs of ProQuest and LexisNexis. But when they did need the info, later in the term as they finished up length design proposals, it was too late to teach them. So these 50 minute sessions were often dry and difficult for students and teachers alike, even when the librarians tried to engage the students with examples of design projects past. Read the rest of this entry »