Who’s The Better Teacher?

May 28, 2009

As we long suspected:

“At the most celebrated institutions of higher education in the United States, the teaching quality of the adjuncts is many times better than that of those on the tenure track.”

Inside Higher Ed didn’t pull any punches in its review of Off-Track Profs: Nontenured Teachers in Higher Education from the MIT Press. The book, just out, reports the results of a study, funded by the Mellon Foundation, that looked at adjuncts teaching at 10 leading research institutions. The authors, university administrators themselves, had seemingly total access to data and personnel.

As important as the finding about ‘quality teaching’ (more on this in a moment) is the study’s analysis of the drivers behind the growth of adjuncts. It isn’t just the cynical need to save money; the decision to place a course with an adjunct results from many factors. This makes sense, since few university administrators bring any sort of cost-cutting philosophy to their leadership. If they did, many aspects of university departments would change before adjunct hiring increased.

As usual, the definition of ‘better teacher’ is based entirely on course evaluations completed by students. Which means the results are worthless. Anti-adjunct (or anti-data) partisans will reject the findings out of hand. And they’d be correct to do so, although the conclusion feels right to us. It would have been interesting to see the authors of the study bounce their data against RateMyProfessors.com data…but that’s another story.

The whole illogical mess is another reflection of the emotional and cultural decision-making that drove the financial meltdown. Is the goal to educate students? Or is the goal to bring in research $ and publish obscure texts? (The article casually mentions that the course load for “many tenured professors has fallen from four to three a year.”) THREE COURSES A YEAR?

Is the goal to sell as many mortgages as possible? Or to make sensible loans that will actually be paid back?

The whole dynamic closely parallels a terrific new piece from Atul Gawande in The New Yorker this week. We’ve blogged about Gawande before; he is taking on the 30,000-foot issues in medicine with an eye for detail and counterintuitive conclusions that are obvious once pointed out. We feel similar work should be (and could be) done in education.


Outcomes Assessment and Grades

February 16, 2009

A couple of recent articles in Inside Higher Ed caught our eye – one on grades and grade inflation, and the other on the creation of the National Institute for Learning Outcomes Assessment.

It seems obvious to us that grading and assessment are largely the same thing. Barring sampling programs or initiatives designed to assess program outcomes (aggregating student results rather than considering the success of individuals), grading IS assessment.

It’s just that the typical grade (A-, B+ etc.) is an extraordinarily blunt instrument.

Imagine reading a car review (okay, bad example – who reads car reviews anymore?) or a film review that is simply a letter grade. Many reviews do feature letter grades, but the grade comes after a thousand words of measured criticism. The criticism is subjective, but we largely accept the skill of a critic like Roger Ebert and take his points seriously. He is assessing the film, and he does it through a narrative response built upon well-established criteria.

Education is even messier than film reviewing, because the letter grades awarded are all over the place. To draw the analogy out a little further, imagine trying to pick a movie to see from the following:

  • 20 films, all rated B+ or higher (with no narrative or other information)
  • 20 films, each rated four times by separate reviewers, where the individual grades are all over the map but the averages are still B+ or higher

You wouldn’t know which film to see…and likewise our system of letter grades is useless for assessing knowledge.
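To make the arithmetic concrete, here is a toy sketch in Python – the films, ratings, and grade scale are all made up for illustration. Two films end up wearing the same B+ label, even though one set of reviews is unanimous and the other is scattered.

```python
# Toy illustration with made-up ratings: two films that both "average"
# to B+, even though the underlying reviews tell very different stories.
GRADE_POINTS = {"A": 4.0, "A-": 3.7, "B+": 3.3, "B": 3.0, "C": 2.0, "D": 1.0}

def average(grades):
    return sum(GRADE_POINTS[g] for g in grades) / len(grades)

def nearest_letter(points):
    # Map a numeric average back to the closest letter grade.
    return min(GRADE_POINTS, key=lambda g: abs(GRADE_POINTS[g] - points))

film_one = ["B+", "B+", "B+", "B+"]  # four reviewers, complete agreement
film_two = ["A", "A", "C", "A-"]     # four reviewers, all over the map

print(nearest_letter(average(film_one)))  # B+
print(nearest_letter(average(film_two)))  # B+ (same label, different story)
```

The letter preserves the mean and throws away the disagreement – which is exactly the information you’d want before buying a ticket, or before trusting a transcript.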


Effective Peer Review: Leveraging the Learning Management System

November 30, 2008

Introduction

Peer review is a widely accepted practice, particularly in writing classes, from high school through college and graduate school. The goal of peer review is typically twofold:

  1. To help students get valuable feedback at the draft stage of their work.
  2. To help students more deeply understand the goals of the assignment.

Unfortunately, peer review is often used as a busywork activity, or as a process that takes advantage of conscientious students while allowing others to do superficial work. For instance, many teachers will hand out a list of peer review questions in class and then give students 30 minutes to review two papers written by their colleagues. An open-ended question might be:

  • “Did the writer adequately summarize and discuss the topic? Explain.”

Many students will write “Yes” under this question and move on. Without review by the instructor (difficult when many instructors have 50 to 150 students), these students can destroy the social contract of a peer review. Other students will spend a lot of time making line edits to the draft – correcting grammar, making minor changes to sentences, etc. At the draft stage this is probably inappropriate – the focus should be on ideas and big-picture organization, not embroidery. Plus, some students aren’t qualified to be dictating where the semicolon should go.
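One way an LMS could push back on the one-word “Yes” review is to screen responses for minimal substance before they count as submitted. The sketch below is hypothetical – the prompts, the 25-word threshold, and the function names are our inventions, not features of any particular LMS – but it shows the kind of cheap, automatic gate that spares the instructor from reading every review.

```python
# Hypothetical sketch: flag peer-review responses that are too thin to
# accept. The prompts and the word-count threshold are invented for
# illustration; a real LMS integration would look different.
MIN_WORDS = 25  # an arbitrary floor; tune per assignment

PROMPTS = [
    "Did the writer adequately summarize and discuss the topic? Explain.",
    "What is the paper's main claim, in your own words?",
]

def is_substantive(response):
    """Reject empty or one-word answers like a bare 'Yes'."""
    return len(response.split()) >= MIN_WORDS

def screen_review(responses):
    """Return the prompts whose answers look too thin to accept."""
    return [p for p in PROMPTS if not is_substantive(responses.get(p, ""))]

review = {PROMPTS[0]: "Yes."}
for prompt in screen_review(review):
    print(f"Needs more detail before submission: {prompt}")
```

A word count is a crude proxy for quality, of course, but even a crude gate changes the incentive: the conscientious reviewer loses nothing, and the “Yes”-and-move-on reviewer has to engage.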

Students aren’t alone in having these problems. In 1982, Nancy Sommers published her highly influential piece, “Responding to Student Writing,” in which she observed how little teachers understand about their commenting practices – essentially, that they don’t know what their comments do. The points she raised in her evaluation of teachers’ first- and second-draft comments on papers have proven remarkably durable.

Two of her major findings:

  1. Teachers provide paradoxical comments that lead students to focus more on “what teachers commanded them to do than on what they are trying to say” (151).
  2. She found that “most teachers’ comments are not text-specific and could be interchanged, rubber-stamped, from text to text” (152). One result is that revising, for students, becomes a “guessing game” (153).

Sommers concluded: “The challenge we face as teachers is to develop comments which will provide an inherent reason for students to revise” (156).

Sustainability – high school students care…

September 26, 2008

Like a lot of bloggers and teachers interested in technology and education, I’m a geek. It makes perfect sense to me to do just about everything I can with technology. I’m very fast with a computer, and while my physical desk might be a pile of (seemingly) disorganized papers, my computer is always immaculately organized.

Okay, my inbox gets a bit crazed, but that’s why someone invented Xobni.

So I’ve been collecting student work in electronic format for a long time – mostly because it seems so inelegant to walk around with 80 student papers (and a pain when commuting on my bike) when I could have each student’s work stored neatly in a folder on my hard drive. I always have access to my comments and student work, can collect the work in the first place via Blackboard, and don’t have to worry about misplacing anything.
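For what it’s worth, the folder-per-student setup is easy to automate. Here is a minimal sketch, assuming submissions are bulk-downloaded with the student’s username at the front of each filename – that naming convention is my assumption for illustration, not Blackboard’s actual export format.

```python
# Minimal sketch: sort bulk-downloaded submissions into one folder per
# student. Assumes filenames like "jsmith_essay1_draft.docx"; the
# leading-username convention is an assumption, not Blackboard's format.
from pathlib import Path
import shutil

downloads = Path("downloads")   # where the bulk download landed
archive = Path("student_work")  # one subfolder per student

for submission in downloads.glob("*.*"):
    student = submission.name.split("_", 1)[0]
    dest = archive / student
    dest.mkdir(parents=True, exist_ok=True)
    shutil.move(str(submission), str(dest / submission.name))
```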

I remember the moment I realized teachers might have another reason for collecting work electronically. I was talking to a technology guru at Carleton College, asking about workflow and teacher adoption of their elearning platform, Moodle. I have this conversation a lot, and it usually highlights one of the dirty secrets in higher education: low utilization rates for elearning tools. There are budgets and entire staffs dedicated to technology, but few faculty use tools like Moodle or Blackboard to any great extent. Maybe they post a few files, an announcement or two… I have visited (prestigious) schools where full-time “instructional technologists” will actually scan a professor’s hardcopy syllabus and place the PDF into the appropriate Blackboard course. And that single document is the only resource in the course. For an entire term.

Of course there are the power users – geeks like me, and large introductory classes that make use of online testing (and automatic grading).

But the technology guru at Carleton surprised me. She told me that most faculty collect work electronically and review it on their computers. When I asked why, she explained that the faculty had changed their workflow for environmental reasons: they changed their ways to save paper.

I was not surprised, then, to discover that Carleton College was rated in the top ten of the more than 300 colleges and universities ranked in the 2009 College Sustainability Report Card, published by the nonprofit Sustainable Endowments Institute. The report card covers many areas of an institution’s operations; the endowment (what the school invests in, etc.) and commitment to sustainability (evidenced by an ‘office of sustainability’ and a full-time person directing said office) are major components.

A teacher’s actions, however, are far more visible to students than complex and long-term investments. What struck me most about the news coverage of the report card (the media loves a ranked list!) had to do with high school students’ attitudes: “Sixty-three percent of 10,300 college applicants recently polled by the Princeton Review said that a college’s commitment to the environment could affect their decision.” Since the report card didn’t touch on elearning or academic technology, I take the Carleton faculty’s commitment to be a cultural expression of the school’s larger commitment and an indicator that they earned their ‘A-.’