Wednesday, September 28, 2011

Peer Evaluation of Blogs

Justin Rogers-Cooper

Instead of assigning a new blog last week, I instructed students to read over a blog written by another student. I placed all students (in both ENG 101 classes) into pairs. I posted these pairs on my main course blogs (each main course blog is quickly becoming an announcement site, an assignment site, a reflection site, and also a blog site).

The assignment post was due Sunday evening, when all my blogs are due. On Tuesday and Wednesday, I went over two blogs with students in each of my classes. I made it clear that our discussions would focus on how the blogs were written. This meant evaluating what writing strategies students were practicing, determining how effective those strategies were, and then giving suggestions about how to make specific changes to the blogs.

In class, we first visually examined the blogs. I asked them what we could tell about the blogs just from noticing things like whether or not there was a title, how dense the paragraphs were, whether or not students employed direct quotations, and whether or not paragraphs contained topic sentences.

Many of the features we observed then became the criteria for evaluation. Students understood that how we discussed blogs would directly inform how they responded to each other's blogs over the weekend. They would write sentences about topic sentences, unified paragraphs, providing context for outside readers, giving readers directions ("This blog is about...I am going to discuss..."), vague language, and keywords (and whether or not they're defined).

I then had the students practice writing their observations in the form of peer evaluation sentences: "Dear so and so, your topic sentences are focused on the main ideas in the paragraphs that follow. However your second sentence doesn't seem to fit. I suggest that you..." And so on.

Learning Objectives
Understanding, Analyzing, Applying, Creating

How Did It Go
When I left both my courses Tuesday and Wednesday, I was quite convinced that I had properly prepared students to assess each other's blogs. Over the weekend, I had varying levels of responses.

In my ENG 101: Language and Human Rights class, I had about 55% participation, which was much lower than I expected. Of those that participated, about 80% at least seemed to understand the expectations and what to comment on. In the remaining (few) cases, individuals posted text-message-type evaluations: short, unclear, and in text-speak.

In my ENG 101: Ethics of Food, I had a much higher rate of participation. I would estimate it at about 75%. Of those, I would say more than half met my expectations for comments. The rest left comments that either didn't meet my criteria or were off topic.

What struck me after this was that generating the criteria for analysis in class, with the students, didn't work as well as I wanted. I wrote all the criteria on the board and instructed them to copy it into their notes. Nonetheless, many students chose not to use the categories of criteria when evaluating each other's work. Many fell into the clichéd pattern of being too kind.

In the future, I'll need to be very explicit about how to comment on another person's blog. This will probably involve a separate assignment sheet. It's also clear that the students don't quite trust the common-sense link between generating the categories for evaluation in class discussion and then applying those notions to each other's blogs. In class, I'll have to leave more time to actually write out a blog comment on the projector for them to see. I'll also have to leave more time for them to get started on the blog comment in class, like during a lab hour. I guess I shouldn't be surprised that ENG 101 students need much more definitive boundaries for their assignments, and less freedom to casually connect class discussion to out-of-class activities.

Every year I generate more and more explicit instructional "rules" for my assignments. I'm not sure if I'm becoming more and more like the public school system they are coming from; I'm not sure if what they're really learning to do is simply follow explicit instructions. And I'm also not certain if my role in ENG 101 is to teach them more critical thinking, or to teach them some critical thinking within the confines of these rule-based assignments.

What I believe worked was showing student blogs on the projector, and generating comments on those blogs using the same categories of criteria that we use to evaluate their longer essays.

To solve the participation issue, I believe I need to make sure students are connected to the class via Twitter. I honestly think that if I had their cell phone numbers and could text them reminders, I would see a slightly higher participation rate. I know their lives are overwhelmed with health and work issues. I've already had several students attend funerals, miss time because of sick children, visit the hospital for ailments of their own, and complain of working "double shifts" of 12 hours. I need to connect more with the students outside of class in order to grab their attention. And, oddly, doing so is a fresh reminder about just what kind of role education has to play in their lives. Or what role I think it has to play, to teach them all. Can I really increase participation through another handout?

NOTE: I have tried to format this with less space between the paragraphs several times.


  1. No worries, Justin-- I got your back with the spacing. :-)

  2. JRC--The handout could help. Nothing wrong with carefully spelling out the requirements.

    On the other hand, maybe you should take into account that

    1. for most/many of your students, this may be the first time they have been asked to do peer eval.

    2. if we go by Bloom's Taxonomy (1956), doing peer eval. successfully is a demonstration of the highest level of learning.

    3. both sets of students are in learning communities; ergo, they know each other somewhat well by now and therefore may be reluctant to be honest about flaws because it could spoil budding relationships.

    And, for all we know, they may feel some kind of cognitive dissonance at having to formally evaluate something they consider informal--a blog. (Dunno about this one).

    So, you see, if we take any/some of the above into account, we could conclude that your peer eval. activity was actually quite successful.

    P.S. Later in the semester, you may want to consider dealing with #3 on the above list by having the two classes peer evaluate each other. In C2.0 we have found that a little distance does miracles for peer eval. You may also want to do the peer eval during class, so you can be there to guide those who need guidance.

  3. Hi Doctor X. (It's fun to write that.)

    Thanks for your thoughts. I actually am planning on having both classes cross-evaluate each other in a couple weeks. This was the "test run."

    I also plan to give them an evaluation feedback sheet, as per your idea earlier this semester for your class. I'm hoping that they'll then see that they're being evaluated for the quality of their comments, and that this will give them extra motivation for the cross-class eval.