The Eli Review Blog


Bad Feedback Happens, Part II

This post is a part of the series Challenges and Opportunities in Peer Learning.

In peer learning, students improve by talking to each other about their work. Student success depends on good conversations – they should be timely, accurate, and helpful. Bad feedback—comments that shut down a writer’s ideas, choices, or motivation—limits success. Helpful feedback—comments that open up a writer’s ideas, choices, and motivation—increases success. It takes practice for students to talk effectively with each other about their work – it doesn’t just happen.

Instructors who are successful at creating a feedback-rich culture in their classrooms make sure that writers get as much helpful feedback as possible. They have to teach reviewers to give it.

In the previous post on Bad Feedback Happens, we explained how teachers can learn valuable things from students’ responses to one another. The displays, metadata (helpfulness ratings, endorsements, add to revision plan), and downloads emerge from one of the core beliefs driving how Eli works. In this video, Eli co-inventor Jeff Grabill explains the premise that “you can’t teach what you can’t see.”

Through these features, Eli helps instructors (and students) see comments and the indicators of their effects on revision. Seeing comments makes it easier to model exemplar feedback and identify bad feedback. By working more closely with what students say to each other in peer learning, instructors can improve the conversation and students’ success.

In this post, we elaborate on three strategies for teaching helpful feedback. We also offer a dozen activities that apply those strategies to task design, class activities, and assessment.


Part 2: Bad Feedback is a Teachable Moment

Strategy: Emphasize describe-evaluate-suggest as a pattern for comments.

The first strategy in teaching helpful feedback is direct instruction. Asking students to read about the characteristics of helpful feedback and telling them what they need to do isn’t enough, but it’s the place to start.

Eli’s approach to effective comments is the “describe-evaluate-suggest” pattern, which Bill Hart-Davidson explains in the professional development resource “Designing Effective Reviews” and in our student resources listed below. The pattern gives students a very clear framework for how to give feedback:

  • Describe – say what you see as a reader
  • Evaluate – explain how the text meets or doesn’t meet criteria
  • Suggest – offer concrete advice for improvement

The pattern helps students know how to talk to each other about their work. And, it gives instructors a quick way to guide conversations about comments.

We also offer several free and open student resources aimed at direct instruction of helpful feedback:

  • Feedback and Improvement is a 10-minute article with a few videos, and we recommend it as homework ahead of the first day of direct instruction. It explains to students why both giving and receiving feedback contribute to learning. It also addresses why everyone is capable of offering helpful feedback when their comments “describe-evaluate-suggest,” a pattern Bill Hart-Davidson explains in a 1-minute video.
  • Describe-Evaluate-Suggest is the 4-minute video explanation of the pattern, and it works well to introduce the direct instruction. This video is also a good resource to refresh reviewers’ attention after a few weeks of giving feedback.
  • Giving Helpful Feedback is a 15-minute article that addresses all the parts of a review (checklists, ratings, and comments). It includes multiple examples of students’ comments so that the class can practice rating comments for helpfulness.

Here are additional activities that reinforce describe-evaluate-suggest:

  • In the contextual comment prompt, ask reviewers to label the parts of their comments. Inspired by one of her students who added (D), (E), and (S) to the respective parts of a comment, Jamie Hopkins at Montcalm Community College now includes that guideline in her contextual comment prompts.

Please offer at least 1 contextual comment per review.

Highlight any words or sentences that catch your eye. After highlighting, choose “add comment.”

Write a comment that extends the writer’s thinking. Describe what you see happening in the passage, evaluate how compelling this passage is, and then suggest a way to improve. Use the questions above to guide your feedback.

For inspiration, here’s an example comment from a previous review (a different assignment) that follows describe-evaluate-suggest:

D – Here you provide a basic explanation of the peer review process, which will probably sound familiar to most students. E – However, if the idea is to build a “case” for peer review, I think we need some rationale for all the time we’ll expect students to dedicate to this process. S – Maybe you could discuss what research tells us about the value of peer review (when it’s done well). Perhaps you could also acknowledge some of the negative “baggage” some students might be carrying regarding peer learning (hitchhikers, I never get good feedback, etc.).

  • In the contextual comment prompt, include a model comment or template. Putting an effective comment in the review task can help with two kinds of problems. First, a real example clarifies expectations quickly, which lowers the cognitive load of a review. Second, it supports reviewers who are unsure what to say.

Offer 3 comments to help the writer improve this essay. Describe what you see in the draft, evaluate that section against the criteria, and suggest a strategy for revising.

Here are example comments to inspire you:

(1) This paragraph has 4 sentences on 1 side and 1 on the other. That makes it seem unbalanced. Try to write equal amounts about each side.

(2) This paragraph talks about _____. I don’t understand what you mean because ______. It would help me if you could explain _____.

(3) This part uses an extremely broad brush, and I have trouble believing your analysis because of this generalization. Consider adding something like ______ that will acknowledge the limits of this claim.

(4) Before you start the details here, try adding a very brief summary of the whole source. I feel thrown into the deep end, and I need a slower start to lead me toward these very specific ideas.

(5) You’ve spent about 10 sentences going over the facts. I don’t see much comparison and contrast, which is what the bulk of the paper should be about. I think you should cut this part in half, and add a new subtopic for your analysis such as ___. That addition will help you go beyond the obvious.

(6) In this part, I can tell you prefer ____. Since this paper is about presenting both sides equally, take out ____. Replace it with ideas like _____.

  • Interrupt review to model helpful comments. When review is happening in a synchronous context, instructors can use the “Writing Feedback” tab to see a live feed of comments being exchanged. When a helpful comment appears, instructors can stop the class momentarily, point out how the comment follows “describe-evaluate-suggest” and then ask students to imitate that peer-exemplar during the remaining time.
  • Calibrate writers on helpfulness ratings. Chad Walden at Montcalm Community College makes a list of 10 comments from a review. Then, he asks individual students to rate the helpfulness of each comment based on how well it follows describe-evaluate-suggest. Next, small groups reach consensus. Finally, the class reaches consensus. Getting consensus around how many stars each comment should get helps students feel more ownership over the expectations.
  • Ask reviewers to reflect. At the end of a review, students can check the “Feedback Given” tab to see the comments they gave other writers. They can identify a comment and explain its strengths and weaknesses in a quick reflection writing assignment.

As our materials and activities suggest, “describe-evaluate-suggest” works well because it helps reviewers compose comments and writers rate helpfulness. By using the pattern in direct instruction and follow-up class activities, instructors instill a habit for helpful commenting.

Strategy: Treat the comment digest as a learning record.

Our earlier post Your Rubric Asks Too Much pointed out that drafts are noisy signals of learning. Drafts can go wrong in a lot of ways, and they can go right for a lot of reasons unrelated to writers’ skills. Few students cheat on comments by asking someone else to do the work, use robots to clean up their prose, or go to the writing center to get extra help on giving feedback (wouldn’t that be lovely?). We can see what they are capable of in student comments. Comments are the “show your work” in writing.

The comment digest download is a treasure trove of information. In Know When to Shut Up, we suggested that reading through reviewers’ comments can help instructors better understand students’ needs. Here are several specific questions to answer using the comment digest to see learning:

  • Are individual reviewers stuck in a rut or making a breakthrough? This question can be answered by looking at the “Feedback Given” tab for individual students in a review task. It’s an easy sort in the spreadsheet too. Reading through all the comments a reviewer gave lets instructors see what reviewers know to look for and to say about others’ work.
    • Sometimes reviewers will say the same thing to every writer, which might not be appropriate. Those reviewers need help to get out of the rut.
    • Sometimes reviewers will say something to another writer that they need to hear themselves.  Those reviewers need a nudge to take their own good advice.
  • How often do reviewers use the language of the review task in their comments? Which of your words are students also using? Ryan Omizo’s computational research at the University of Rhode Island explores this question with some cool data visualizations. A simple counting formula [=COUNTIF(range,"thesis")] in a spreadsheet works well enough for informal, evidence-based teaching (a scripted version appears in the sketch after this list), though the risk is that students are parroting the term rather than applying it thoughtfully. Or, as you skim through the list, note what they say and how they say it:
    • What are reviewers saying that you are glad they are saying?
    • Can you echo the way they say it in your debriefing and in the way you comment on drafts?
    • What are reviewers not saying that you wish they would say? How can you get them to say it?
  • Is the balance of global revision suggestions and local editing suggestions appropriate for this stage of the writing process? In the spreadsheet, add a column for “Revision Goal,” and categorize each comment as “global” or “local.” Then, use a pivot table to summarize. You’ll see how much feedback is aimed at each goal.
  • How is comment length affecting helpfulness? In the spreadsheet, sort by word count and then by helpfulness rating. You can eyeball it, or build a pivot table to summarize. There’s no magic comment length, but there may be a trend line you can announce as a peer norm. For example, in one of my classes, any comment longer than 22 words was 3 stars or higher in terms of helpfulness; shorter comments had only a 50% chance of being helpful. I explained to reviewers that writers needed enough information in the comment to understand what they meant, and that turned out to be about 22 words of explanation, which is at least one sentence. Those guidelines—22+ words, full sentence—help reviewers monitor their feedback. Also, you might find like I did that the reviewers writing the shortest comments had other issues—motivation, skills, time management. Comment length turned out to be a proxy metric of student engagement. If there’s a pattern of short comments, something larger is happening with students that affects their learning.
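
If you prefer a script to a spreadsheet, the same questions can be answered in a few lines of Python with pandas. The sketch below is illustrative only: it assumes the comment digest has been exported to a CSV file, and the file name and column names ("Comment", "Helpfulness", "Revision Goal") are made up for this example, so adjust them to match the headers in your actual download.

    # A minimal sketch of comment-digest analysis, assuming a CSV export with
    # columns named "Comment" and "Helpfulness" (adjust to your real headers).
    import pandas as pd

    digest = pd.read_csv("comment_digest.csv")  # hypothetical file name

    # 1. How often do reviewers use the language of the review task?
    #    (spreadsheet equivalent: =COUNTIF with a keyword such as "thesis")
    mentions_thesis = digest["Comment"].str.contains("thesis", case=False, na=False)
    print("Comments mentioning 'thesis':", mentions_thesis.sum())

    # 2. Balance of global vs. local suggestions, assuming you have added a
    #    hand-coded "Revision Goal" column ("global" or "local") to the sheet.
    if "Revision Goal" in digest.columns:
        print(digest["Revision Goal"].value_counts())

    # 3. Does comment length track helpfulness? Compare average ratings for
    #    comments above and below a word-count threshold (22 words, echoing
    #    the classroom example above).
    digest["Word Count"] = digest["Comment"].str.split().str.len()
    long_enough = digest["Word Count"] >= 22
    print("Mean rating, 22+ words:", digest.loc[long_enough, "Helpfulness"].mean())
    print("Mean rating, shorter:", digest.loc[~long_enough, "Helpfulness"].mean())

None of these counts replaces reading the comments themselves; they simply point to where a closer look is worthwhile.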

For teacher-researchers, trend analysis in comment digests is one of the most promising areas for three reasons:

  • Instructors can pull the data themselves—single review or all reviews.
  • The data is clean—clearly labeled, easy to work with.
  • The data is much smaller than drafts—manageable.

Strategy: Assess feedback.

Another way to encourage helpful feedback is to assess it. We don’t simply mean grading feedback, though we have suggestions for that too. Giving students feedback about their feedback is important for improving a skill that will help them as writers and as leaders.

The Conference on College Composition and Communication’s Position Statement on Writing Assessment explains in its first guiding principle that

Writing assessment is useful primarily as a means of improving teaching and learning. The primary purpose of any assessment should govern its design, its implementation, and the generation and dissemination of its results (2014).

Assessment is a method of gathering evidence and then analyzing it in order to determine if teaching improved learning. To assess improvement, instructors need to see change from a less desirable performance to more desirable performance. To teach for improvement, they have to know how groups of students with different characteristics go from one to the other. Assessment is “seeing learning.”

At Eli, we often use the phrase “see learning” to build on Jeff’s point that instructors can only teach the problems they see. We also use that phrase to indicate three more things:

  1. See more. Eli captures, displays, and exports the work writers AND reviewers are doing in peer learning. That’s unique. With other technologies, comments are siloed in files, which means reviewers’ work is hard to piece together. With common composing programs, comments are “resolved” and thrown away as writers revise, making it impossible to trace how specific comments lead to revision decisions. Eli gives instructors access to a rich archive of the artifacts students produce in write-review-revise cycles. Those artifacts are contextualized by other metadata, which lets teacher-researchers study how comments were valued and used by writers and instructors.
  2. See engagement. Eli makes crystal clear who is not doing any work and who is doing too little. In “No Pain, No Gain,” we explored ways of getting students to participate in deliberate practice. Missing tasks, too few comments, and low helpfulness ratings are big red flags that students aren’t engaged enough to improve.
  3. See improvement. Eli is designed for teaching better feedback and better revision. Literally, the design of the app gives instructors more features and data about comments and revision plans—parts of the writing process that usually only get lip service. Most writing technologies focus on showing improvement in drafts using features like calibration and scoring rubrics, which are possible in Eli but miss the point. As a pedagogy backed by technology, Eli’s argument is that instructors have little basis for assuming that an improved draft is evidence of a writer’s better habits and skills unless that writer can also offer helpful comments to their peers and explain a cogent revision plan. Better feedback, better revision, better writers.

Our recommendation that instructors “assess feedback” is an invitation to do a new thing: Use the tools and downloads in Eli to design a new method of capturing and analyzing evidence to determine if teaching helpful comments led to better feedback. The app and our resources make the shallow end of “helpfulness ratings” easy, but the deep end is mapping how students learn to give better feedback.

One of our K-12 teacher-researcher groups in Michigan has been working on this question for a few years. They’ve designed a developmental continuum to describe how reviewers grow their skills over time. In the ASCD Online Express newsletter, they discuss their rationale and share what they’ve developed for students to use in self-assessment and for instructor assessment. As a model of assessing feedback, it shows what to look for in reviewers’ comments in order to identify skill level and then suggests what reviewers need to practice in order to move up a level.

Level 1: Acting primarily as copyeditors, reviewers label parts of the draft, correct mistakes, and/or offer opinions not clearly related to writers’ purposes.
Leveling up:
  • understand purpose
  • describe claims and evidence
  • explore ideas
  • limit editing in early drafts

Level 2: As partners in meaning-making, reviewers help writers reshape claims and evidence. Reviewers accurately restate writers’ purposes and ideas, identify strengths, and recommend strategies for improving weaknesses.
Leveling up:
  • notice how the parts work (or don’t work) together
  • say why some parts are strong and others are weak
  • recognize most criteria
  • focus comments on higher-order concerns

Level 3: Relying heavily on the instructor’s guidelines, reviewers offer writers meaningful revision suggestions. Reviewers align feedback to criteria. They may occasionally misidentify parts of the draft or misevaluate how well the draft meets expectations. Their comments may restate criteria and often repeat the same suggestions.
Leveling up:
  • label the parts of drafts
  • notice how organization, tone, sentence structure, and diction achieve purpose
  • recognize all criteria
  • pose reflective questions about claims and evidence or about the writer’s craft

Level 4: As fellow writers, reviewers help peers understand how their choices affect readers. They consistently and accurately identify parts of the draft and evaluate how well the draft meets expectations. They ask writers thoughtful questions that often go beyond the instructor’s guidelines. They offer suggestions that reveal a developing awareness of how writers achieve a particular voice.
Leveling up:
  • describe the writer’s argument in ways that refine the claim and evidence
  • describe the writer’s style in ways that refine it
  • pose questions that invite writers to think in new ways
  • weigh ideas and styles from multiple perspectives within the intended audience

Level 5: Working as developmental editors, reviewers help writers identify and polish their core ideas through rigorous attention to audience, purpose, claims, evidence, organization, and style. Comments consistently go beyond the instructor’s guidelines. Reviewers help writers productively address the complex rhetorical situation.
Leveling up:
  • zero in on the most important changes a writer should consider
  • frame feedback to connect criteria, conventions, and the unique rhetorical situation
  • suggest several strategies for improving argument and style
The point of assessments like this one is to emphasize reviewers’ growth, not to penalize poor performance on a single review. In You’ve Totally Got This, we discussed the importance of recognizing how hard it is to give good feedback and how much practice it takes. While it makes sense to grade students’ effort on each review (completion, etc.), any assessment of reviewers’ performance should take a portfolio approach, spanning multiple reviews. Eli makes that easy: the comment digest for the entire course can be downloaded from the Analytics tab, and instructors can then view all comments or filter to a specific time frame (one way to build such a portfolio view is sketched below).
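
For instructors who want that portfolio view in a script rather than a spreadsheet, here is a small sketch in the same spirit as the earlier one. The column names ("Reviewer", "Review", "Date", "Comment"), the file name, and the date window are placeholders to adapt to your actual course export.

    # A minimal sketch of a portfolio view over the course-wide comment digest.
    # Column names, file name, and the date window are illustrative assumptions.
    import pandas as pd

    digest = pd.read_csv("course_comment_digest.csv", parse_dates=["Date"])

    # Limit the digest to a chosen window, e.g., the second half of the term.
    window = digest[(digest["Date"] >= "2024-10-15") & (digest["Date"] <= "2024-12-10")]

    # Pull one reviewer's comments across every review in that window so that
    # growth (or a rut) across multiple reviews is easy to see in one place.
    portfolio = (window[window["Reviewer"] == "Student A"]
                 .sort_values(["Date", "Review"])
                 [["Date", "Review", "Comment"]])
    print(portfolio.to_string(index=False))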

By assessing feedback, instructors take the next big step in treating the comment digest as a learning record. Assessing comments within a single review helps instructors make just-in-time interventions. Assessing comments across multiple reviews helps instructors better understand how students move from giving bad feedback to helpful feedback.

Bottom line: Bad feedback is a teachable moment.

Bad peer feedback is good feedback for instructors. When students can’t talk to each other about their work, it is an indication they are struggling to learn the skills that will help them improve. By explicitly teaching a pattern for helpful comments such as “describe-evaluate-suggest,” instructors can help more students give helpful feedback, which has the spillover effect of writers getting helpful feedback. By paying attention to trends and patterns in comments within a single review, instructors can elevate the conversation students are having about their work. By developing new ways to assess feedback, we can better understand how to keep bad feedback as a short-term rather than chronic condition in peer learning.

Read Part 1 of Bad Feedback Happens: Do you want bad feedback to be a short-term or chronic condition?

References

  1. CCC Executive Committee. 2014. “Writing Assessment: A Position Statement.” Conference on College Composition and Communication. http://www.ncte.org/cccc/resources/positions/writingassessment.

Challenges and Opportunities in Peer Learning

This blog series addresses the design challenges instructors face in creating rich peer learning environments. Those dilemmas feel familiar: finding the time, motivating students, worrying whether students have the skills to help each other learn. But we—the Eli Review team of Jeff Grabill, Bill Hart-Davidson, Mike McLeod, and Melissa Graham Meeks—approach these dilemmas from what might be an unfamiliar place. We see more peer learning, done well, as the solution to problems of motivation, skill development, and learning. In the classes we teach and in our work as a company, peer learning is routine and powerful.

  1. Your Students Aren’t Revising Enough
  2. Know When To Shut Up, Part I
  3. Know When to Shut Up, Part II
  4. Your Rubric Asks Too Much, Part I
  5. Your Rubric Asks Too Much, Part II
  6. Debrief Every Time, Part I
  7. Debrief Every Time, Part II
  8. No Pain, No Gain, Part I
  9. No Pain, No Gain, Part II
  10. You’ve Totally Got This, Part I
  11. You’ve Totally Got This, Part II
  12. Making a Horse Drink
  13. Bad Feedback Happens, Part I
  14. Bad Feedback Happens, Part II
  15. Give One, Get One
  16. The Secret Sauce of Producing Better Writers

Cover photo credit: Mikka Em

Bad Feedback Happens, Part II was published to the Eli Review Blog in the category Pedagogy.
