
Eli Review Learning Resources

Using Eli to Build Criteria-Driven Reviews

Jeff Grabill talks about criteria-driven review.

————
Jeff explains how Eli makes him a better teacher of review. With Eli, his review tasks are shorter, more precise, and criteria-driven.

Build focused, specific review prompts.
http://www.macmillanhighered.com/Catalog/uploadedImages/Content/BFW/Slide-Show_Demos/BSM_Marketing/Melissa_Meeks/4%20stdt_vw_wrtg_data.jpg
————–
Review tasks in Eli look like surveys, so building review prompts means designing good survey questions about one or two goals or criteria at a time. Your aim is twofold:

(1) Scaffold review so that reviewers read, rate, and comment on the draft like you would.

(2) Design review prompts that will lead to data you and writers can use to gauge performance and make revision decisions.

Turn criteria into checklists.
http://www.macmillanhighered.com/Catalog/uploadedImages/Content/BFW/Slide-Show_Demos/BSM_Marketing/Melissa_Meeks/checklist.jpg
————–
Trait identification response types work like checklists. Reviewers decide whether the draft has the desired

* components
* characteristics, and/or
* coverage.

Turn rubrics into Likert scales.
http://www.macmillanhighered.com/Catalog/uploadedImages/Content/BFW/Slide-Show_Demos/BSM_Marketing/Melissa_Meeks/likert.jpg
————–
When you want reviewers to make fine-grained distinctions about how well a criterion is met, use Likert scales.

If you already use a rubric, your rubric’s categories become your scale.

Use stars to find exemplars.
http://www.macmillanhighered.com/Catalog/uploadedImages/Content/BFW/Slide-Show_Demos/BSM_Marketing/Melissa_Meeks/star_rating.jpg
————–
Star ratings invite students to make intuitive evaluations.

In the class report, the drafts of the two highest-rated writers will be available so that you can use these peer exemplars as models.

Use contextual comments for direct feedback.
http://www.macmillanhighered.com/Catalog/uploadedImages/Content/BFW/Slide-Show_Demos/BSM_Marketing/Melissa_Meeks/contextual(1).jpg
————–
Contextual comments are anchored in words, sentences, or phrases in drafts that have been pasted into Eli. These work like “insert comments” in Microsoft Word, and writers can add the comments they receive to their revision plans.

You can include only one contextual comment prompt per review, but that prompt might include several questions.

Use final comments for overall feedback.
http://www.macmillanhighered.com/Catalog/uploadedImages/Content/BFW/Slide-Show_Demos/BSM_Marketing/Melissa_Meeks/final_comments.jpg
————–
Final comments allow reviewers to reflect on the whole draft.

Writers will have the opportunity to add reviewers’ feedback to their revision plans.

Ask multiple ways.
http://www.macmillanhighered.com/Catalog/uploadedImages/Content/BFW/Slide-Show_Demos/BSM_Marketing/Melissa_Meeks/ask_twice.jpg
————–
Asking about the same criteria in multiple ways gives you a way to gauge reviewers’ consistency. Researchers call this triangulating data.

Build momentum.
http://www.macmillanhighered.com/Catalog/uploadedImages/Content/BFW/Slide-Show_Demos/BSM_Marketing/Melissa_Meeks/layer.jpg
————–
Ask a series of leading questions to help reviewers be discriminating when they rate and comment on the draft.

By layering the review, you help reviewers draw conclusions about drafts’ strengths and weaknesses.

Sequence for success.
http://www.macmillanhighered.com/Catalog/uploadedImages/Content/BFW/Slide-Show_Demos/BSM_Marketing/Melissa_Meeks/divide_reviews.jpg
————–
By now, you know the importance of short, focused review tasks. There’s value in connecting those tasks too.

This particular task shows how reviewers’ predictions, based on a close reading of the introduction, can be compared with a reverse outline they complete in the next review, when they respond to the whole draft. These two activities direct reviewers to read and re-read the draft purposefully, offering writers meaningful feedback each time.
