This document offers an in-depth breakdown of the different response types instructors can add to a Review Task. There are a few important points to keep in mind when thinking about review task response types.
In a trait identification response, the prompt names a trait that you wish reviewers to identify in their peers’ work. You might, for example, create a prompt that says: “The article summary begins with a full citation of the article.” Reviewers respond by checking a box to indicate whether the trait is present or absent in the draft they are reviewing.
Trait identification items offer writers formative feedback that helps them prioritize next steps for revision. The absence of required elements lets writers and instructors know what to work on in the next draft, while the presence of important traits confirms where the writer is meeting expectations.
Trait identification elements are also aggregated for the whole class, which helps a teacher to see what kinds of issues to prioritize in discussion with the larger group. If your review results page shows that 95% of your students have a full citation in their article summary, for instance, you know you can likely move on to work on something that a smaller percentage of students have mastered or prioritized in their drafts.
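Eli produces these class-level summaries automatically, but the arithmetic behind them is simple. As a rough sketch only (the data and function name below are hypothetical illustrations, not Eli’s actual code):

```python
# Hypothetical sketch of trait-identification aggregation (not Eli's code).
# Each review records True/False for whether the named trait was observed.
reviews = {
    "full_citation": [True, True, False, True, True],
}

def trait_percentage(responses):
    """Percent of reviews marking the trait as present."""
    return 100 * sum(responses) / len(responses)

print(f"{trait_percentage(reviews['full_citation']):.0f}%")  # → 80%
```

A teacher scanning such percentages can quickly decide which traits most of the class has mastered and which still need whole-group attention.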
Scaled response items are questions that the teacher asks the reviewer to respond to on a scale. The prompt may frame the scale for students right in the question: “On a scale of 1 to 5 with 1 being lowest and 5 highest, how confident are you that the article citation is in correct MLA format?”
Scaled response items are the most versatile, and they can be used to provide feedback ranging from formal to informal at just about any stage in the process. A teacher asking students to write limericks to practice meter and rhyme scheme, for instance, might frame several scaled items as votes in a fun round of “Best in Class Awards” with humorous categories: best rhyme, best meter, funniest, and so on. Or you could use scaled items to replicate criteria found on standardized tests such as the ACT Writing Exam in order to do test prep.
Rating scales are aggregated for the whole group so the teacher can see class averages and compare the results of individual students with those of the whole group. Eli identifies “peer exemplars” by showing the instructor those students whose work scores highest on scaled items. This provides you with an instant set of possible models and peer scaffolding opportunities. You can ask the large group to look at a highly ranked example, for instance, and discuss the choices the writer made that make it a strong draft in a particular area. Best of all, these are examples nominated by students, so even when the teacher disagrees, revealing the trend creates a “teachable moment.”
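The aggregation behind class averages and “peer exemplars” amounts to straightforward arithmetic. As an illustrative sketch under stated assumptions (the names and numbers are invented, and this is not Eli’s actual implementation):

```python
# Hypothetical sketch of scaled-response aggregation (names and data invented).
# Each writer maps to the ratings their draft received on one scaled item.
scores = {
    "Alice": [5, 4, 5],
    "Ben":   [3, 3, 4],
    "Cara":  [4, 5, 4],
}

def average(ratings):
    return sum(ratings) / len(ratings)

# Class average across every rating given on this item
class_average = average([r for ratings in scores.values() for r in ratings])

# "Peer exemplar": the writer whose draft averaged highest on the item
exemplar = max(scores, key=lambda writer: average(scores[writer]))

print(f"class average: {class_average:.2f}")  # → class average: 4.11
print(f"exemplar: {exemplar}")                # → exemplar: Alice
```

Comparing an individual’s average against the class average is then a one-line subtraction, which is what makes this kind of at-a-glance comparison feasible during class time.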
Keep in mind about scales:
A Likert scale functions much like a rating scale, with a few important distinctions. The prompt is typically framed as a declarative statement, and the respondent is asked to select one option from a set of pre-defined choices (strongly agree, agree, etc.). The response options are usually balanced on either side of a neutral midpoint so that the set covers an even spectrum of agreement and disagreement.
Keep in mind about Likert scales:
Comments are open-ended text responses from reviewers, similar to what a reviewer might write in the margin or at the end of a draft. Comments can be attached to specific passages in a text for assignments that have been typed or pasted into the Eli editor, or they can be made as global suggestions to the writer. Instructors can prompt reviewers to offer specific types of feedback with a comment response if they like. For instance, a teacher might say, “Please offer comments on claims that could use more evidence to strengthen the writer’s overall argument.”
Comments are often the most valuable feedback for the writer because they can offer concrete revision suggestions as well as confirmation that the writer is on the right track. With Eli, all comments from all reviewers are collected in one place, making it easy for a writer to strategize a revision plan.
Instructors can view comments from reviewers as they come in during a review. In a face-to-face classroom, this is very helpful when you are trying to monitor whether reviewers are attending to the appropriate criteria at the appropriate stage of the writing process. If an instructor sees that reviewers are focusing on grammar and mechanics too early in the process, for instance, she can intervene and redirect their focus to higher-order concerns such as development and arrangement.
Instructors can also endorse a comment by clicking a button. This allows the writer to see that the instructor thinks a particular comment is valuable, and it also allows the reviewer to see that she has made what the instructor feels is a helpful response.
Keep in mind about contextual comments:
Final comments function much like Contextual Comments, but in this case students may only leave a single comment. For assignments that include more than one product – for example, a resume and a cover letter may be turned in for review at the same time – a Final Comment option gives students the ability to make a global comment about both documents.
This is a simple text box for responding to all of the products in a holistic way. Teachers can also use this response to address broader issues of purpose and audience. For instance, a prompt for a final comment on a resume and cover letter might ask, “What are the most interesting qualifications an employer might choose to highlight or ask questions about in an interview with the writer?”
Keep in mind about Final Comments: