Writers’ Toolkit for Microsoft Word…the FREE app for your English class

September 9, 2010

11trees is proud to announce The Writers’ Toolkit for Microsoft Word 2007 and 2010.

The 11trees Writers’ Toolkit is both a free website of resources and a powerful Microsoft Word add-in that unifies important tools for writers including:

  • The web’s best resources on grammar, style, organization, argument, research, and citation formatting
  • Tools like a thesaurus, citation management, citation formatting, word count, spell check, and basic grammar check
  • Embedded Google Scholar search and candid advice on how to write compelling academic essays (and get better grades)

These resources appear inside Microsoft Word, so writers can access them as they create.

>> Download and Install the Writers’ Toolkit


Annotate for Word 2007 PRO v2.0 is HERE…

January 17, 2010

We’ve sold hundreds and hundreds of Annotate PRO licenses over the last year, and the free version has been downloaded over 10,000 times. We’re proud of what we’ve accomplished, but we always knew that easy customization was critical to broader adoption, potentially across different ‘industries.’

The ability to change the buttons, and the content behind them, has been our number one feature request. Sure, all versions of Annotate make it much easier to leverage Microsoft Word’s built-in automation (called AutoText in versions up through Word 2003, and QuickParts in Word 2007). But like many aspects of Word, these features are clunky.
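
For the curious, here is a rough sketch of the machinery we’re talking about. It drives Word over COM from Python (via pywin32) purely to illustrate how an AutoText entry gets stored and reused; it is not Annotate’s code, and the entry name and feedback text are made up for the example.

    # Illustrative sketch only: requires Windows, Word, and pywin32.
    import win32com.client

    word = win32com.client.Dispatch("Word.Application")
    word.Visible = True
    doc = word.Documents.Add()

    # Put a piece of boilerplate feedback into the document, then save that
    # range as an AutoText entry in the Normal template (Quick Parts exposes
    # the same store through the Word 2007 ribbon).
    rng = doc.Range(0, 0)
    rng.Text = "Consider stating your thesis at the end of your first paragraph."
    word.NormalTemplate.AutoTextEntries.Add("ThesisReminder", rng)

    # From then on, the stored feedback can be dropped into a document in one call.
    target = doc.Range(doc.Content.End - 1, doc.Content.End - 1)
    word.NormalTemplate.AutoTextEntries("ThesisReminder").Insert(target, True)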

For Word 2007 only, we’ve gone ahead and done it: a user can customize every Group label and 90% of the buttons, add new buttons, and change the underlying content that appears with a single click.

Watch a 5 Minute Demo of Annotate for Word 2007 PRO v2.0

Want to change the URL linked to a particular comment? Easy. Click, click, click – and forever after the new URL appears. Want to change the description for a particular issue, and/or the button label itself? No worries…a few clicks, and the content is updated. There are 70 built-in comments, and another 150 blank comments that can be tailored to specific uses. The information is stored locally in a database, and it can even be backed up or swapped out, so a user could keep multiple databases of comments (for different courses, say).

We’ve also confirmed that Annotate for Word 2007 works with Windows 7, and we’ve even got users running Annotate with Office 2010 Beta.

Since Annotate for Word 2007 brings such advanced features to editing and managing documents, we’ve dropped the price of our Word 2000/XP/2003 and Mac Office 2004 versions to $25. These versions make it easy to leverage a bank of custom comments, but they don’t allow full customization of the Annotate button library.

We’re looking forward to refining v2.0 and adding sister versions of the software designed for office applications. Stay tuned!

> Video Demo

> Learn More

> Customer Reviews


The Ultimate ‘Outcome’

January 17, 2010

We often use the example of assessing driving skills in faculty development workshops.

We like the example because most of us feel that we know a good driver when we see one (and that we’re good drivers!), but it quickly proves to be a difficult skill to assess. Try developing a rubric for driving skills…

There are also lots of regional peculiarities that help personalize the conversation. Examples:

  • Jug handles in New Jersey
  • Ice/snow driving in Maine
  • Requirement in Connecticut to verify that you are not a child molester when purchasing/operating a van (!)

In one such workshop, a teacher offered a terrific assessment insight: they argued that observing how a driver acted at a suburban stop sign would tell you a lot about their driving skill and attitude. Do they:

  • Screech to a halt?
  • Roll through the stop sign?
  • Ignore the “if two cars arrive at the same time to a 4-way stop, the car to your right goes first” rule?
  • Let the car’s momentum end, then accelerate away after looking both ways?

This ‘outcome’ trumps a lot of the minute detail that ‘experts’ try to build into such assessments (remember, everyone thinks they know how to drive, so their rubrics quickly become complex).

Last night I saw the documentary film In A Dream, which contains an even higher-level form of outcome: if you were trying to measure parenting skill, imagine what would become evident if an adult child of the parents created a documentary. And captured on film the father abandoning the mother for another woman. After 40 years of marriage.

It’s a great documentary, available on DVD. And how much more subjective can you get than parenting over a lifetime? The results require thinking and analysis on the assessor’s part – obviously any filmmaker will have an agenda and a viewpoint, and will edit footage for a certain effect. But the film feels like an authentic, balanced portrait and is intensely moving.

Assessing a presentation, or a collage, or a term paper should seem fairly simple in comparison. So rather than worry about multiple criteria measured nine different ways, think about what leading indicators are appropriate for your students and the task at hand.


Who’s The Better Teacher?

May 28, 2009

As we long suspected:

“At the most celebrated institutions of higher education in the United States, the teaching quality of the adjuncts is many times better than that of those on the tenure track.”

Inside Higher Ed didn’t pull any punches in its review of Off-Track Profs: Nontenured Teachers in Higher Education from the MIT Press. The book, just out, reports the results of a study funded by the Mellon Foundation that looked at adjuncts teaching at 10 leading research institutions. The authors, university administrators themselves, had seemingly total access to data and personnel.

As important as the finding about ‘quality teaching’ (more on this in a moment) is the study’s analysis of the drivers behind the growth of adjuncts. It isn’t just the cynical need to save money; in fact, the decision to place a course with an adjunct results from many factors. This makes sense, since few university administrators bring any sort of cost-cutting philosophy to their leadership. If they did, many aspects of university departments would change before adjunct hiring increased.

As usual, the definition of ‘better teacher’ is based entirely on course evaluations completed by students. Which means the results are worthless. Anti-adjunct (or anti-data) partisans will reject the findings out of hand. And they’d be correct to do so, although the conclusion feels right to us. It would have been interesting to see the authors of the study bounce their data against RateMyProfessors.com data…but that’s another story.

The whole illogical mess is another reflection of the emotional and cultural decision making that drove the financial meltdown. Is the goal to educate students? Or is the goal to bring in research $ and publish obscure texts? (The article casually mentions that the course load for “many tenured professors has fallen from four to three a year.” THREE COURSES A YEAR?)

Is the goal to sell as many mortgages as possible? Or to make sensible loans that will actually be paid back?

The whole dynamic runs very close to a terrific new piece from Atul Gawande in The New Yorker this week. We’ve blogged about Gawande before; he is taking on the 30,000-foot issues in medicine with an eye for detail and counterintuitive conclusions that are obvious once pointed out. We feel similar work should be (and could be) done in education. Read the rest of this entry »


Outcomes Assessment and Grades

February 16, 2009

A couple of recent articles in Inside Higher Ed caught our eye – one on grades and grade inflation, and the other on the creation of the National Institute for Learning Outcomes Assessment.

It seems obvious to us that grading and assessment are largely the same thing. Barring sampling programs, or initiatives designed to assess program outcomes (aggregating student results rather than considering the success of individuals), grading IS assessment.

It’s just that the typical grade (A-, B+ etc.) is an extraordinarily blunt instrument.

Imagine reading a car review (okay, bad example – who reads car reviews anymore?) or a film review that is simply a letter grade. Many reviews feature letter grades, but they come after a thousand words of measured criticism. It is subjective criticism, but we largely accept the skill of a critic like Roger Ebert and take his points seriously. He is assessing the film, and he does it through a narrative response built on well-established criteria.

Education is even messier than film reviewing, because the letter grades awarded are all over the place. To draw the analogy out a little further, imagine trying to pick a movie to see from the following:

  • 20 films, all rated B+ or higher (with no narrative or other information)
  • 20 films, each rated four times by separate reviewers, where the individual grades are all over the map but the averages are still B+ or higher (a quick back-of-the-envelope sketch of this appears just below)
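
To make that second scenario concrete, here is a tiny worked example. It is our own illustration in Python, not anything from either article, and the grade-point values are invented: two films land at roughly the same B+ average even though one set of reviews is unanimous and the other is scattered from A to C+.

    # Illustrative only: identical averages can hide wildly different spreads.
    from statistics import mean, pstdev

    points = {"A": 4.0, "B+": 3.3, "B": 3.0, "C+": 2.3}

    consensus = ["B+", "B+", "B+", "B+"]   # four reviewers agree
    scattered = ["A", "A", "B", "C+"]      # reviewers all over the map

    for label, grades in [("consensus", consensus), ("scattered", scattered)]:
        scores = [points[g] for g in grades]
        print(f"{label}: mean {mean(scores):.2f}, spread {pstdev(scores):.2f}")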

You wouldn’t know which film to see…and likewise our system of letter grades is useless for assessing knowledge. Read the rest of this entry »


Annotate for Mac Users

January 26, 2009

Thanks to some very excited English teachers and a constant stream of emails from interested Mac users, we now have a version of Annotate for Word that runs on Mac Office 2004.

Yes, that was five years ago, but progress is progress.

With the latest release of Mac Office (2008), Microsoft dropped support for Visual Basic for Applications, the language we wrote the older versions of Annotate in (the Word 2007 version is written in .NET). But just as with Word 2007 on the Windows side of the world, many teachers haven’t bothered upgrading. So there are still a lot of Word 2004 computers out there, and we hope to help a lot of those computers help their owners create better feedback for their students.

The free version of Annotate for Word 2004 for Mac Office 2004 (we haven’t figured out a more elegant way to name the thing that is also specific enough) isn’t quite ready, but we’ve got a number of PRO users going already. So don’t hesitate to be in touch…

http://www.11trees.com/annotate-for-word.html


Effective Peer Review: Leveraging the Learning Management System

November 30, 2008

Introduction

Peer review is a widely accepted practice, particularly in writing classes, from high school through college and graduate school. The goal of peer review is typically two-fold:

  1. To help students get valuable feedback at the draft stage of their work.
  2. To help students more deeply understand the goals of the assignment.

Unfortunately, peer review is often used as a busy-work activity, or a process that takes advantage of conscientious students while allowing others to do superficial work. For instance, many teachers will hand out a list of peer review questions in class, and then give students 30 minutes to review two papers written by their colleagues. An open-ended question might be:

  • “Did the writer adequately summarize and discuss the topic? Explain.”

Many students will write “Yes” under this question and move on. Without review by the instructor (difficult when many instructors have 50 to 150 students), these students can destroy the social contract of peer review. Other students will spend a lot of time making line edits to the draft – correcting grammar, making minor changes to sentences, etc. At the draft stage this is probably inappropriate – the focus should be on ideas and big-picture organization, not embroidery. Plus, some students aren’t qualified to be dictating where the semicolon should go.

Students aren’t alone in having these problems. In 1982, Nancy Sommers published her highly influential piece, “Responding to Student Writing,” in which she observed how little teachers understand about the value of their commenting practices – that, essentially, they don’t know what their comments do. The points she raised in her evaluation of teachers’ first- and second-draft comments on papers still stand.

Two of her major findings:

  1. Teachers provide paradoxical comments that lead students to focus more on “what teachers commanded them to do than on what they are trying to say” (151).
  2. She found “most teachers’ comments are not text-specific and could be interchanged, rubber-stamped, from text to text” (152). One result is that revising, for students, becomes a “guessing game” (153). Sommers concluded by saying, “The challenge we face as teachers is to develop comments which will provide an inherent reason for students to revise” (156). Read the rest of this entry »
