Eli Review provides both course-level and individual-level reports on feedback given and feedback received. This brief tutorial helps instructors use these four reports, presenting strategies and methods for coaching better commenting and revision:
Class-level reports about Feedback Received (by writers)
Student-level reports about Feedback Received (by a writer)
Class-level reports about Feedback Given (by reviewers)
Student-level reports about Feedback Given (by a reviewer)
Theories behind coaching writing
Coaching is one way teachers can show students how to learn. When coaching, instructors tell students what they were looking for, what patterns they see, and what actions should follow from those insights.
Coaching engages students in metacognition. While instructors debrief, students see and hear how habits of mind play out in real time.
Coaching is important “pre-game,” “in-game,” and “post-game.” As described in our detailed look at evidence-based teaching using formative feedback, pre-game and in-game coaching helps students make adjustments that improve their learning. Formative feedback is distinguished from post-game (summative feedback) in two important ways:
Timing: Formative feedback is provided while there is still time to make changes. Summative feedback comes at the end of a process through grades or test scores.
Quality: Formative feedback offers descriptive as well as suggestive advice. Summative feedback is normative—placing the writing/writer into a ranking system. Summative feedback may include suggestions, but since the time for improvement is past, students cannot act on them.
Eli Review focuses on formative feedback. It helps instructors with pre-game and in-game coaching so students practice skills before being graded on them. In Eli, writers get feedback that is timely, consistent, and actionable. This formative feedback for writers also provides formative feedback for instructors who are actively tracking learning while it’s happening.
Time is of the essence with formative feedback. Managing all these details in time for students to benefit from an instructor’s insights is challenging, even with analytics. The next two sections describe strategies for figuring out what to say as well as the best places to say it.
After students have completed a review, instructors need to know four things:
How much effort did students put into the review?
How well did they understand the criteria when evaluating others’ work?
How helpful were their comments?
What’s next?
Not every review will emphasize each issue equally. Pick and choose the issues that fit your learning goals. If you are thinking like a researcher in your own classroom, answering these questions with Eli’s analytics can help you design your coaching strategies.
How much effort did students put into the review?
Students’ engagement in the process of giving and getting feedback is an indication of learning.
To assess how much effort students put into a review, use these analytics:
Look at the completion rates in the review task report in Eli Review. This data helps instructors identify, for example, whether the class as a whole met the minimum expectation for the number of comments given.
Look at the comment totals. Use the Engagement tab in the review to see how many comments were exchanged and how helpful students’ comments have been. This data can also help instructors assess whether students are giving and receiving feedback proportionally.
Check longitudinal trends. Use Roster or Analytics to get a sense of students’ engagement across multiple reviews. To see if students have a pattern of weak engagement, check the student’s course analytics.
These analytics indicate the level and nature of students’ efforts. These displays also exert positive social pressure. When instructors project them, students can see how their engagement in giving and getting feedback compares to the rest of the class. For an example of how to encourage longer comments, see the comment volume blog.
How well did they understand the criteria when evaluating others’ work?
Building students’ confidence in the value of peer review depends on developing a shared understanding of the criteria. Students need to trust themselves and their peers to correctly recognize when a draft meets criteria and when it doesn’t. Repeated opportunities to practice applying the criteria help students gain confidence that peers are giving them accurate feedback.
Within a single review, instructors can use the following analytics to explore how accurately students align their feedback with criteria.
Check peer-nominated models. For rating scales, Eli showcases the three drafts rated highest by peers. Read those drafts. Do you agree with students’ assessment of each draft on each criterion? Whether you agree or not, explain why.
Check trait identifications. For any draft, look at the trait identification sets (checklists) and ask:
How would you rate this draft using this checklist?
Did peer feedback match yours? If not, what do you need to help students understand?
Did peers agree (100% in each column) that this draft met each item in the checklist? If not, what do students need to understand to consistently rate the draft?
Check scales. For any draft, look at all the Likert/Star Scales.
How would you rate this draft using the scale?
Did peer feedback match yours? If not, what do you need to help students understand?
Did peers agree (100% in each column or an even spread in adjacent columns) about how this draft scored on the scale? If not, what do students need to understand to consistently rate the draft?
Check Comments. Look in students’ comments — the random ones displayed in the Responses to Writing/Feedback tabs, in the Student-Level reports, or in the downloadable comment digests — and see if students refer to the criteria for the assignment in their feedback.
Which criteria are students mentioning?
Which criteria would you like to see mentioned more often?
If they aren’t talking about criteria, why not? Is it because they don’t understand the criteria or because the prompt doesn’t ask them to respond to specific criteria?
By paying attention to how well reviewers understand criteria, instructors can increase the quality of peer feedback. If writers get and follow criteria-driven comments, their writing will improve.
How helpful were students’ comments?
Comments are a critical part of the feedback students use in Eli to plan revision. We have student materials and instructor materials that provide strategies for teaching helpful comments.
In Eli, helpfulness ratings are a way for writers to tell reviewers the extent to which their comments helped them revise. For helpfulness ratings to be powerful, writers need to be “stingy with the stars.”
If instructors have prepped students to give helpful feedback as reviewers and to rate helpful feedback as writers, Eli’s helpfulness rating analytics provide a way to monitor the quality of students’ comments:
Look for extremes. Check the Responses to Reviews tab. This data tells you average helpfulness. Click “See comment ratings by all reviewers” for a ranking of your class by helpfulness average. Figure out where the extremes are—the really strong and weak reviewers. Talk about what strong students are doing well and ways that struggling reviewers can improve.
Endorse exemplars. Look at the comments given by the really strong reviewers (see the highest rated reviewers in the Review Feedback tab). Endorse comments where appropriate (learn more about endorsement).
Look for judicious helpfulness ratings. In an individual student’s report of writing feedback or review feedback, check whether the writer was fair when rating comments for helpfulness. For the 5-star scale to be useful, most comments should get 3 stars. Are writers being judicious in their use of 1 star and 5 stars? Talk with students about how to rate feedback more appropriately.
Make templates. Often, the hardest part about giving feedback is knowing how to phrase it, so templates are useful. In our resources for student writers, we encourage Bill Hart-Davidson’s “describe-evaluate-suggest” pattern of giving feedback, but many other templates are possible. From strong comments, write a template reviewers can use next time.
What’s the next step?
Good coaches know how to use analytics to inform what they do in class. (Want a basketball analogy? Read this blog post.) After instructors examine students’ efforts, their alignment with criteria, and their helpfulness, instructors have to decide what to do.
Consider a double-take on the review. Was the review feedback so misaligned with criteria that reviewers need to do it over? In the debrief, teach reviewers how to improve. Then, create a new review task, load the same review from your task repository, and assign it. Students won’t lose their feedback from the first review, and they’ll gain another round of improved feedback.
Consider not responding directly to individual students. Has the success of peer feedback and of debriefing given students enough to work on? Can students get to the next stage of drafting without the instructor’s comments?
If yes, don’t do more than endorse the best comments.
If no, decide which criteria to comment on. Instructors shouldn’t repeat a comment already made to the whole class when coaching individuals. Per-student feedback should do different work than the peer review and debriefing.
Consider a revision plan update. Students build revision plans from the comments they’ve received. The debriefing discussion may give writers new ideas about revision. Students can add more comments to the Overall Note area of their revision plans. This gives them a place to write down what they learned from the whole class coaching session.
Methods for Delivering Coaching Advice
Coaching requires knowing about content, skills, motivation, and expected changes in performance over time. Coaching is also about finding the right time and place to talk with students about their performance.
1. Convey insights about the whole class to all students.
With Eli’s real-time analytics, instructors can decide to address students’ performance through
Extemporaneous comments delivered while students are working through the review and before they leave to work on revision plans (see Ann Shiver-McNair’s Live Feed article)
Prepared comments delivered after students have completed a review task but before they’ve submitted revision plans
Spoken in class or in a synchronous online environment
Spoken via screencast
Written via class blog, e-mail, announcement, etc.
2. Convey insights to individual students.
With Eli’s aggregated view of all the comments writers received and reviewers gave, instructors can address individual students’ performance in several ways:
Tapping on the shoulder. Instructors who are co-located with students while they are working on reviews can watch the live feed of selected comments to find students who are struggling. They can then quickly chat with the student without interrupting the whole class. The equivalent is possible in email, too.
Writing notes in Eli’s revision plans. In revision plans, instructors primarily respond to a writer’s selection-prioritization-reflection on the feedback they’ve received. Instructors can also give students advice about their work as reviewers.
Conferencing. Although we think conferences are best used to address the feedback students have received over the course of multiple reviews rather than a single review, talking with students privately outside of class is always a good way to get their attention.
Digging Deeper
Formative feedback is messier than summative feedback, and pre-game coaching of students’ work as writers and reviewers is challenging. Contact us if you’d like assistance interpreting the trends in your courses. If you’re looking for other tutorials, consider:
Blog posts about working with Eli Review’s analytic data.
Other resources for learning about teaching with Eli Review include:
[resource-table]
You can also find human support to help you learn how to use Eli Review effectively. You can contact Melissa Meeks, Eli’s director of professional development, and:
get a tutorial about how Eli works and how it might work in your classroom;
schedule a series of conference calls throughout the term;
identify one or two instructors to work closely with Melissa to design a library of tasks for your institution and then to mentor other instructors on-campus.
Have any additional questions about how to use Eli Review? Contact us at [email protected].