Review is the crucial step in Eli Review: reviewers give feedback based on criteria, and writers get the feedback they need on their writing tasks so that they can revise more effectively.
As an instructor, it’s important to think of yourself as the review coordinator – you
establish what’s to be reviewed
define the criteria and ways reviewers give feedback to writers
Debriefing can happen while a review is underway and/or after it ends. You can use the real-time data to model effective feedback. After reviews are complete, you can use peer exemplar drafts as models and guide students in creating revision plans.
Eli’s review tasks allow instructors to devise guidelines for peer feedback that include checklists, ratings, and comments.
Step 1: Define Review Details
The first thing you’ll do is define some crucial contextual details about the review. Specifically:
Review Title: Specific, memorable titles are easier to find and reuse in future terms.
Weak: “Review of Draft”
Stronger: “Review of 1st Draft of Literature Review”
Start Date: Specify the date and time when students can begin giving feedback.
Due Date: Specify the date and time when students must complete giving feedback.
Instructions: An optional field where you can frame the learning objectives for giving feedback. These instructions appear like a cover page before students begin giving feedback on drafts.
Privacy: Show names or Classmate #. Important things to consider with anonymity:
You, as the instructor, will always know who said what to whom. This option only conceals identities for students.
Eli cannot detect student names that appear in the draft itself (in an MLA header, for example). If you intend an anonymous review, be sure to instruct students NOT to put their names in their writing task submissions.
Loading Existing Review Tasks
Instead of creating a review from scratch, you can “load from library” to borrow and customize a task from Eli’s curriculum or from any Eli course in which you’ve been an instructor or co-instructor.
The details above in Step 1, as well as those in Steps 2 and 3, will likely need to be updated, but Step 4 (Response Types) will be complete. The repository display will allow you to browse all of the review tasks you've created in the past, even as part of another course. When you've found the task you want to clone, just click the "Load" link and the task form will be populated with that task's settings.
Step 2: Select Writing Tasks to Review
Next, you'll define the writing tasks to be reviewed.
You can select up to 5 tasks.
For example, you might select:
Slides and report
Infographic and write-up
Cover letter and resumé
Two Writing Tasks of the same assignment
writing task composed in Eli – the composed-in-Eli version gives reviewers the ability to highlight words in the draft before commenting.
writing task uploaded to Eli – the uploaded version allows reviewers to see formatting such as images or bibliographies.
Step 3: Reviewer Groups
This step puts students into groups. Eli requires reciprocal peer learning.
Students assigned to a group with 3 other peers will be required to give feedback to all 3 peers BEFORE they can get the feedback they’ve been given by those peers.
Because students cannot “complete” a review until they’ve given feedback to all their peers, it’s important to only put writers with submitted drafts into peer groups.
The Automatic grouping type and option to Exclude late writers from groups enforce this expectation.
We refer to writers who have not yet submitted a draft as "late" writers. Late writers are a particular challenge for coordinating feedback. Reviewers grouped with late writers will see "waiting" as the status of their review task because they cannot "complete" their review until their peer "completes" the writing task. See our tutorial for strategies on managing or accommodating late writers.
Grouping Types
Eli has two grouping types: Automatic and Manual.
The Automatic grouping option shuffles students into peer review groups on a scheduled review Start Date.
Importantly, Automatic grouping only forms groups of on-time writers, students who complete a review’s associated writing tasks before its Start Date. The resulting peer review groups consist exclusively of students who are ready to begin the peer learning activity, so on-time writers can begin working without waiting for late group members to complete the writing task.
Automatic grouping has two settings:
Each writer will give and get – define the number of peer reviews each writer should receive. Eli will create as many groups as necessary with students randomly placed throughout (a rough sketch of this idea appears below).
Late Grouping – schedule a second round of automatic grouping for late writers who complete the writing task after groups were automatically formed on the review Start Date.
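As a rough illustration of the reciprocity behind this setting, here is a hypothetical Python sketch; it is not Eli's actual algorithm, only a way to see why "give and get N" implies groups of N + 1 students, each of whom reviews the other N members.

import random

def make_groups(students, reviews_per_writer, seed=None):
    """Shuffle students into groups of size reviews_per_writer + 1 so each
    member reviews every other member of their group; members of slightly
    larger groups give and get one extra review. Illustration only."""
    rng = random.Random(seed)
    pool = list(students)
    rng.shuffle(pool)
    size = reviews_per_writer + 1
    groups = [pool[i:i + size] for i in range(0, len(pool), size)]
    # If the class doesn't divide evenly, spread the leftover students
    # across the other groups so no one is left ungrouped.
    if len(groups) > 1 and len(groups[-1]) < size:
        for i, student in enumerate(groups.pop()):
            groups[i % len(groups)].append(student)
    return groups

roster = [f"Student {n}" for n in range(1, 23)]           # a class of 22
for group in make_groups(roster, reviews_per_writer=3, seed=1):
    print(len(group), group)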
The Manual grouping option provides 3 ways to create groups:
Add Group – use the “Add Group” button to manually create as many groups as needed and then drag students from the “Ungrouped” list into the desired group.
Shuffle Into Existing Groups – click this link to shuffle all students in the "Ungrouped" list into the existing groups. The groups will reshuffle each time you click it.
Specify # of reviews each writer gets – select the number of peer reviews each writer should receive. This will create as many groups as necessary with students randomly placed throughout.
Important things to keep in mind about groups:
Rearrange before assigning: You can always drag-and-drop students between groups if you don’t like how Eli has paired them, or if you want to distribute your strongest reviewers more evenly through the groups.
Rearrange after assigning: You can drag-and-drop students to a different group to solve problems after a review has begun, for example, moving a reviewer whose group members never submitted drafts into a group of on-time writers.
Step 4: Response Types
Trait Identification: These are checklists of minimum requirements, features, learning goals, common errors, etc.
Rating Scale: This is a scale from 1-10 stars with the option to allow reviewers to explain their rating.
Likert Scale: This is a multiple-choice question with the option to allow reviewers to explain their rating. A row in your rubric is one Likert scale.
Comments: Comments are open-ended text responses from reviewers, similar to what a reviewer might write in the margin or at the end of a draft. Comments can be attached to specific passages in a text for assignments that have been typed or pasted into the Eli editor, or they can be made as global suggestions to the writer. Instructors can prompt reviewers to offer specific types of feedback with a comment response if they like. For instance, a teacher might say "please offer comments on claims that could use more evidence to strengthen the writer's overall argument."
Final Comment: A final comment is one larger box for reviewer feedback. This is useful for having students offer global comments about the entire work or reflect on their experience of the text, rather than respond to specific questions.
Important things to keep in mind about response types:
Every review must have at least one response type.
There is no limit to the number of response types a review can have. You may add as many trait identification sets, rating scales, or Likert scales as your learning goals require.
Trait identifications, star ratings, and Likert scales allow for "Tallied Responses," which show instructors the percentage of peer votes each draft earned compared to the possible peer votes. Tallied Responses provide an indication of how well the draft meets criteria (a small sketch of the arithmetic appears below).
Scaled response types (both ratings and Likert) allow you to ask students to “Explain Your Response,” which both you and the writer will see after the review.
For more about the affordances of each response type, and some tips on designing your prompts, see our response type tutorial.
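To make the Tallied Responses arithmetic concrete, here is a minimal Python sketch; the function and inputs are hypothetical stand-ins, since Eli performs this calculation for you.

def tallied_response(votes_earned, votes_possible):
    # Percent of peer votes a draft earned for one trait or rating item.
    if votes_possible == 0:
        return 0.0
    return 100 * votes_earned / votes_possible

# A draft reviewed by 4 peers, 3 of whom checked "has a clear thesis":
print(tallied_response(3, 4))   # 75.0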
Previewing / Saving as Draft / Assigning / Editing
You can use the buttons at the bottom of a review to
Preview: See the task as students will see it. Previewing is a great way to proofread your assignment.
Delete: You can delete a review saved as a draft or an assigned review in which no student has submitted work. You CANNOT delete a review in which a student has submitted work.
Save as Draft: Tasks saved as drafts aren't visible to students, and you can keep revising drafts until you are ready to assign the task. Eli doesn't have autosave, so be sure to save your task as you go.
Schedule Review Task: Schedule the review task to begin on the review task start date.
Assign Now: Assigned tasks are visible to students. As soon as the first student begins work, the task locks, and you can only change the due date.
2. Review Report – Class-Level Data
Eli's feedback analytics are unique, giving instructors multiple quantitative and qualitative ways to view, sort, and analyze both drafts and feedback.
Once students begin submitting reviews, Eli makes available a series of reports about reviewer progress toward completion as well as writer and reviewer performance. This section outlines the review report summarizing performance for the entire class, and the next section describes the report for individual students.
Each section of these reports is designed to provide instructors with:
Real-time, live review results – at any given time, Eli will let you know how far along students are with the review. This data is generated in real-time, which means that you can measure progress and see what students are saying to one another even before they complete their reviews. This affords many pedagogical interventions, including the ability to stop reviewers in the middle of a review task and model effective responses.
Surfacing "peer exemplars" – Eli will identify both writers and reviewers that students are responding to favorably. You can use this data to model effective responses or to select examples for critique.
Task Overview Display
The first display in the review report is an overview display that will help you coordinate the review, including tools that allow you to do the following:
Reminder of response types – an overview of the structure you created for students and how they’ll be responding.
Make edits – you’ll be able to alter the details and response types right up until the point where students have begun submitting reviews, after which you’ll only be able to edit the due date.
Rearrange groups – you can move reviewers between groups or move them out of groups altogether based on their reviewing ability or their completion (or lack thereof) of the writing tasks under review.
Anonymity toggle – sometimes called "projector mode," this allows you to conceal the identities of students while you project the report, perhaps to model a reviewer's feedback.
You can navigate between the other sections of the report using the navigation tabs.
Responses to Writing
The “Responses to Writing” tab provides an overview of how the entire class is responding to the writing being reviewed.
This is an aggregate display, tracking not individuals but class-level performance. From here, you’ll be able to:
Monitor progress toward completion – you’ll see this represented as a percentage of reviews complete versus reviews remaining.
View Tallied Response data – If you checked the box for “Tallied Responses,” you’ll see a table displaying the percent of peer votes drafts received.
Assess data from quantitative response types – Likert scales will be represented as tables, showing responses to each individual item, while rating scales will show the class average.
Identify "peer exemplars" among writers – the data for rating scales will include links to the profiles of the three students rated most highly by their reviewers. Clicking on an exemplar's name will take you to that student's writing report, where you can click on the link to view their draft. Discussing the draft as part of debriefing can help students learn from each other.
View comments – This display will surface a random sampling of feedback given to writers, allowing you to assess the overall tone of how students are responding to one another. You can refresh the browser window to get a new random set.
This report will continue to evolve as students complete their reviews. Projecting it during a review and using it as a status board, including recognizing reviewers or writers by name, can help reviewers feel good about their contributions and model helpful comments.
Responses to Reviews
The “Responses to Reviews” report is a display of how writers have responded to the reviews they’ve received. This report is meant to help identify the most helpful reviewers as well as the most helpful individual comments given by reviewers.
Review statistics – you'll see a breakdown of several data points regarding the review, including:
Total number of comments made
Average number of comments per reviewer
Percentage of comments rated by writers
Average helpfulness rating of all comments
Highest-rated reviewers – Eli will identify the two reviewers rated most helpful by the writers they reviewed and will include a short summary of their performance and a sample of their comments.
Breakdown of all reviewer ratings – an optional report will display a complete breakdown of all students, sorted by the helpfulness ratings given to them by the writers they reviewed.
Highest-rated comments – Eli will select a sample of the most highly-rated comments from this review and display them here. They can be used to model effective feedback or to identify particularly helpful reviewers.
As with the “Responses to Writing” report, this report will evolve as students complete their reviews. It is also made more useful if students rate their feedback, which can be encouraged by sharing this data with them – the more feedback is rated, the better the report.
Engagement Data
The Engagement Data section of the review report provides data about reviewer activity that can be helpful when coaching students in how to give better feedback. Instructors may choose to use this data in their evaluations, but it is primarily meant to give insight into reviewer behavior that can help improve the helpfulness of their feedback.
Intensity – how many words reviewers gave and writers received
Reciprocity – the ratio of comments given/received and words given/received
Responsiveness – how comments received by writers have been rated, endorsed, and added to revision plans
Included in this report:
For both comments given and comments received by a reviewer:
Count – the total number of comments given and received for each reviewer
Words – how many words students wrote and received in feedback
% Rated for Helpfulness – the percentage of comments that have been rated for helpfulness by the recipient
Helpfulness Score – the average helpfulness rating (on a scale of 1-5) of the comments given by the reviewer and the ratings given to the comments the reviewer received*
Endorsements – the number of endorsements instructors gave to the student’s comments
Plans – the number of comments each student added to their revision plans
Ratio of comments given by a reviewer to the comments they received
* Helpfulness calculations ignore comments that haven't been rated – Eli does not assume that the lack of a rating by the recipient implies anything about a comment's perceived value (see the sketch below).
This report is truncated to fit into a standard browser window and is focused specifically on comments. The Engagement Data Download provides significantly more data about reviewer engagement.
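To make the starred note above concrete, here is a minimal Python sketch of an average helpfulness score that skips unrated comments; the function and data are hypothetical, not Eli's code.

def average_helpfulness(ratings):
    # Average the 1-5 helpfulness ratings, ignoring comments the recipient
    # never rated (represented here as None).
    rated = [r for r in ratings if r is not None]
    return sum(rated) / len(rated) if rated else None

# Five comments given by a reviewer; two were never rated by recipients.
print(average_helpfulness([5, 4, None, 3, None]))   # 4.0, not 2.4

Unrated comments simply drop out of the average; they are not counted as zeros.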
3. Review Report – Student-Level Data
Where the class-level reports provide data about student performance in the aggregate, the individual student report provides a breakdown of a single student and their work in a review. Clicking on a student’s name (or their anonymized label) anywhere in the review report will open that student’s individual report.
The Student Summary Report shows the student’s completion status, including submission times, for each assigned review as well as the engagement data.
Each student report has links mirroring the class-level reports:
Writing Feedback – a complete breakdown of the feedback this student got from reviewers about their own writing.
Review Feedback – a complete breakdown of all the feedback this student gave to the writers they reviewed.
The data in these reports resembles the data found in the class-level reports, but here it is focused specifically on that student's performance. Both reports, whether listing feedback the student received or feedback the student gave, are displayed in a similar format.
Unlike the class-level reports, however, these student reports offer detailed information about every comment received or given by that student. The report includes for each comment:
Full text of the comment
A link to view the comment in context of the writing (when applicable)
Endorsing an individual comment is a way for you, as the instructor, to send two messages at once:
To the writer: Your endorsement encourages writers to pay attention to the comment and to consider adding it to their revision plan.
To the reviewer: Your endorsement lets reviewers know that you think their feedback was particularly helpful.
Students will only see when comments have been endorsed, not when they haven’t been endorsed, so use of this feature is at the instructor’s discretion. See our endorsement tutorial for detailed examples of using endorsements strategically.
While most data generated by reviewers is accessible through this review report, teachers and teacher researchers often prefer to work with raw data when conducting research. In some cases, the typical browser window is too small to allow a satisfying, usable display of all the data Eli makes available.
To that end, review data can be downloaded in several forms, including the comment digest, the engagement data report, and the criteria and scale data download.
Report Formatting – .csv files
Both data reports will download as CSV (comma-separated values) files. This type of file can be imported into almost all spreadsheet and database applications, making it possible to sort and query the data in any number of ways.
Engagement Data Download
This report is an extension of the data available in the Engagement Data tab of a review report. It includes one row per review participant, and each row contains the following data:
reviewer name
number of assigned reviews completed
% completion of the review
date of completion
final comment made (yes/no)
final comment word count
ratio of comments given to comments received
number of comments given as a reviewer (count)
number of comments given rated for helpfulness by recipients
% of comments given rated by recipients for helpfulness
average helpfulness score of comments given
number of comments received from reviewers
number of comments received rated for helpfulness by the writer
% of comments received rated by writer for helpfulness
average helpfulness score of comments received
Things to note about how Eli assembles the Engagement Data report:
report columns correspond to the enabled features (i.e., reviews that don't use the "final comment" feature will not have final comment data in the report)
numbers are rounded to three significant digits
comments without a rating are excluded from comment totals and helpfulness – Eli does not assume that a lack of a rating implies a choice on a recipient’s part and does not hold that against a reviewer
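If you prefer to query this download programmatically rather than in a spreadsheet, a short Python sketch like the one below can sort the report by reviewer helpfulness; the filename and column headers here are assumptions based on the fields listed above, so adjust them to match your actual file.

import pandas as pd

report = pd.read_csv("engagement_data.csv")   # assumed filename

# Sort reviewers by the average helpfulness of the comments they gave;
# the column names are assumptions based on the fields described above.
columns = ["reviewer name",
           "number of comments given as a reviewer",
           "average helpfulness score of comments given"]
print(report.sort_values(columns[-1], ascending=False)[columns].head(10))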
Criteria and Scale Data Download
This download collects all of the quantitative feedback generated during a review. Reviewer responses to criteria sets, rating scales, and Likert scales are all included in this download. Specifically, it includes the following information:
name of the reviewer and the reviewed student
name of the feedback component
component prompt (what the student was asked)
the reviewer’s rating or choice selected
the reviewer’s explanation for their rating or choice (if present)
* The Criteria and Scale Data download is only available for reviews that have these features added.
Comment Digest
The comment digest is a report compiled as reviewers complete their reviews. It is a collection of all of the comments exchanged during the review and the quantitative data about those comments*. Instructors can download digests of individual reviewers or of all the reviewers in a course.
A comment digest includes one record for every comment exchanged between reviewers, and each record contains the following data:
names of the comment author and the recipient
full text of each comment
date and time of comment submission
comment word count
comment helpfulness rating assigned by recipient
instructor endorsement (yes/no)
* The Comment Digest is only available for reviews in which contextual comments were enabled.
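As with the engagement data report, a comment digest can be explored programmatically. Here is a small Python sketch that summarizes comments per reviewer; the filename and column names are assumptions based on the fields listed above, so adjust them to match your download.

import pandas as pd

digest = pd.read_csv("comment_digest.csv")   # assumed filename

# Per-author summary: comment count, average word count, average helpfulness
# (unrated comments are blank and are skipped by mean), and endorsements.
summary = digest.groupby("comment author").agg(
    comments=("comment text", "count"),
    avg_words=("word count", "mean"),
    avg_helpfulness=("helpfulness rating", "mean"),
    endorsed=("instructor endorsement", lambda s: (s == "yes").sum()),
)
print(summary.sort_values("avg_helpfulness", ascending=False))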