The Eli Review Blog

New tutorials, research, teacher stories, and app updates - subscribe via RSS or our newsletter!

Your Rubric Asks Too Much, Part I

This post is a part of the series Design Challenges and Opportunities in Peer Learning.

This blog series has argued that the best learning is a function of the right practice at the right time. Instructors must carefully time when they offer expert feedback and when they coach peer learning. To elevate the conversation around students’ drafts, instructors need a clear sense of how that conversation matures over time.

In this two-part post, we explain why rubrics can overload learners when used too early or as the only intervention. Designed for summative feedback, a rubric belongs at the end of a writing process. A peer grading activity that applies the rubric at the end works well, so long as students have received formative feedback along the way. Early in the process, learners need a scaffold (guided in part by the rubric) that helps them do the intellectual work that precedes the writing the rubric assesses. In part one, we offer a Mad Libs routine for developing that scaffold; in part two, we describe strategies for gathering clear evidence that students are learning.

Part 1: Are learners on the right path?

Common strategies for designing assignments, such as those found at Faculty Focus, emphasize goals, genres, and components. Usually, there’s also advice about having a good assessment or grading plan, such as Heidi Goodrich Andrade’s recommendations for designing instructional rubrics. These are necessary decisions. Big ones. For this post, we’re interested in the smaller decisions that determine whether students can execute these bigger goals.

Design Challenge: We know the destination better than the journey.

Learners start from different places, and the destination instructors have in mind is hard for students to fathom. A description of the destination is not the same thing as a description of the journey. A grading rubric is not a sufficient scaffold of learning. It only describes the destination.

Moving students—a highly differentiated group—from wherever they are to where we want them to be means sequencing skills or writing moves carefully. (We like “moves” because we see the goal as helping writers build a flexible repertoire.) The Boise State group who inspired this blog series called this problem “finding the learning crux”: What is the most important thing that will help students get on the right path from wherever they are starting?

The learning crux is not obvious, even to experienced instructors. Plus, performance is noisy. In a draft, it’s hard to distinguish a signal that learning is happening from noise that should be ignored or discounted. Consider these signal-noise problems:

  • Sentence-level control slips as writers grapple with complex ideas. Are the poorly formed sentences a signal of a pattern that requires tutoring or are they just noise that will clear up as the writer keeps going?
  • In literature reviews, drawing connections between sources requires both comprehension and rhetorical skill. Is the writer who submits a list of source summaries in paragraph form signaling that they comprehend enough and are now ready to do synthesis writing? Or is that performance just noise indicating that they didn’t invest enough time and energy to follow the instructions?

These decisions about signal versus noise inform our evaluation of individual students and our sense of the ideal sequence of skills.

Two of the best indicators of expert teaching are distinguishing signals from noise and drawing appropriate conclusions from those signals. All teachers must learn how to do this. That’s why a big rubric makes knowing what to teach next more problematic (too many things of more or less equal importance). That’s why teaching a course for the first time is so hard and why it doesn’t get much easier unless we can zero in on reliable evidence of student learning.

Peer learning holds the promise of surfacing more indicators of learning more quickly. For effective peer learning, instructors have to know how to capture and interpret evidence of improvement throughout the journey. The design challenge is to understand the journeys different students take, the rally points along the way, and the differences between signals and noise in student performance.

Design Opportunity: Get high-quality snapshots of learning early and often.

Bill Hart-Davidson and I have written about the challenge of sequencing skills as the work of finding learning indicators. A learning indicator is a signal that points to something else. Instructors need to be able to identify these “pointers.” Jeff Grabill describes an indicator as a slice of information: a behavior, an attitude, or a cue that learning is happening. In and of itself, a learning indicator is not that spectacular, but it indicates a current state and may point to improvement on the horizon. A learning indicator shows you if students are on the right path.

A short, well-designed activity focused on a learning indicator yields a valuable snapshot of who needs what kind of help. Bill explains this pedagogical approach with an anecdote:

I was volunteering in my daughter’s kindergarten classroom. The teacher led a peer learning activity designed to help students grasp pre-arithmetic concepts: counting, matching numerals with their equivalent number of objects, and distinguishing odd and even numbers.

The students were working in pairs at small tables. Each table had a deck of cards numbered 1–12 and a stack of red and blue poker chips.

The teacher watched quietly, circulating among the tables as students took turns drawing a card, turning it face up, and then laying out the number of tokens to match the numeral printed on the card.

I had instant admiration for the teacher when I saw how the lesson unfolded. She asked students to use red for odd numbers and blue for even numbers, which gave her an immediate (and stealthy!) way to see which students grasped this idea.

She could also see who was struggling with counting by watching how quickly they set about laying out their tokens. She had clear learning indicators! She found a way to externalize and distinguish among all the concepts she wanted to teach using the colored tokens!

As soon as she noticed a table had two strong students, she would call for the students to pause and she’d ask them to switch partners. She would guide students stronger in one concept to tables with a student who could benefit from a more capable peer to demonstrate. And then the “game” would resume.

After about 15 minutes, the lesson ended. All of the students gained confidence. What really impressed me is that, with this exercise, the teacher had created a very clear, individualized picture for herself of who needed additional help in three specific pre-arithmetic concepts.

What’s the equivalent of colored tokens for signaling improvement in student writing along the journey to a final draft? Our team at Eli Review doesn’t think there’s anything quite so simple for teaching college English or even college algebra. Instead, as Bill explains, we admire this instructor’s pedagogical feats:

  • isolating each signal by assigning a unique visual indicator (color for odd/even, number of tokens for count);
  • boosting signal strength by moving from a one-to-many network (a teacher and silent students working alone) to a many-to-many network (multiple partners talking through multiple rounds of deliberate practice), which makes each student’s strengths and weaknesses relative to peers clearer as they talk through the activity; and
  • reducing background noise, because reshuffling partners lets her confirm that students’ struggles stem from confusion over the concept, not from the dynamics of a particular partnership.

These goals for peer learning activities lead to the development of an effective scaffold. Isolating the intellectual skills that precede polished performance, boosting how clearly each skill can be seen during practice, and reducing noisy distractions allow instructors to get a clear sense of who needs what kind of help. We think we can help teachers of any subject get a high-quality snapshot of learning like the kindergarten token-counting example by finding learning indicators using this Mad Libs routine:

  1. When I see X, I know my students have learned Y.
  2. I see X best in Z.
  3. To see more X in Z, I need to ask students to do A, B, and C.
  • X = learning indicator
  • Y = learning outcome
  • Z = small bit of writing
  • A, B, and C = criteria or steps

Within a “small bit” writing assignment and a focused review activity, we’re confident that instructors can isolate learning indicators, boost the signal, and reduce the noise such that they can see whether students are on the right path and moving in a good direction.

In the previous article, we shared several examples to demonstrate that learning indicators are quite small:

Learning Outcome: Learning Indicators

Writing a technical definition:
  • Relative detail (similar to)
  • Contrastive detail (different from)

Organizing a multi-paragraph essay:
  • Topic sentence points backward to thesis
  • Topic sentence points forward to details in the paragraph

Deeper reflections:
  • Claims of fact (what happened)
  • Claims of value (why it mattered)
Succeeding on the small learning indicator doesn’t necessarily mean success on a big rubric. But an error in the small learning indicator guarantees failure on a big rubric.

In fact, students’ errors are essential sources of learning indicators because they guide how we should coach students. That’s Mina Shaughnessy’s legacy in composition studies. Errors and Expectations taught us that our best source of learning indicators in students’ written drafts is the way they make errors. In their call for “A Formative Assessment System for Writing Improvement,” Nancy Frey and Douglas Fisher use this same line of thinking to distinguish between

  • mistakes, which students are capable of correcting with more time (noise); and
  • errors, which are consequential missteps in thinking or writing that require instructor intervention (signals).

If instructors can see an error and understand the learning problem behind it, they can teach the correction. The key is two-fold: know what to look for and be able to see it in the data.

The pedagogy our team advocates centers on high-quality snapshots of student learning, early and often. The app supports this approach with analytics and downloads that enable evidence-based teaching, but the pedagogy rests firmly on instructors’ choices. Eli works for peer grading with big rubrics, but it is much more powerful as a way to scaffold student learning.

We call the pedagogy “rapid feedback cycles” or “building feedback-rich classrooms.” We mean designing frequent peer learning activities around learning indicators so that instructors can detect signals of learning (i.e., consequential missteps) in three types of student work:

  1. Drafts where writers reveal the limits of what they can do with ideas and words
  2. Comments where reviewers reveal the limits of what they can say about others’ ideas and words
  3. Plans where writers reveal the limits of how they can apply feedback and pick up new strategies

If instructors teach in response to errors in one or more of the types of student work (and more is better), learning improves. The only way to go this deep is to also go narrow. A big rubric might be the destination, but it is not a learner’s journey.

Bottom-line: Deliberate practice of sequenced skills works.

A learner’s journey is slow, effortful, and circuitous. Activities that zoom in on one or two learning indicators and look for those cues in writing, comments, and planning make learning feel faster, easier, and more linear. These small, repeatable activities create the conditions for deliberate practice.

Deliberate practice turns novices into experts. Ericsson, who originally studied musicians and chess players, defined deliberate practice as “activities that have been specially designed to improve the current level of performance” (368). The challenge is sequencing the practice of narrow skills. In their article “Training Advanced Writing Skills: The Case for Deliberate Practice,” Kellogg and Whiteford argue:

[T]he necessary coordination and control cannot succeed without reducing the relative demands that planning, generation, and reviewing make on working memory. The writer cannot flexibly and adaptively coordinate planning, generating, and reviewing when the needs of any single process consume too many available resources. The writer cannot be mindful of the whole while struggling with the parts (emphasis ours). Training through deliberate practice would appear to be the only way to provide the writer with sufficient attention and storage in the working memory system to cope with the demands of advanced composition. (255)

Students’ need for mindful practice is real. Peer grading with a big rubric once or twice won’t help much. Students’ cognitive load problem related to attention and storage is real. Big rubrics can’t help much. Instructors’ signal-noise problem of evidence-based teaching is real. Big rubrics make that problem worse.

Designing activities around learning indicators can help. It keeps the cognitive load low for students, and it simplifies interpretation for instructors. The Mad Libs routine for learning indicators is a method for isolating a signal, boosting it, and reducing noise.


Part 2 of Your Rubric Asks Too Much, Reducing the Noise, will be available on 9/1.

Photo credit: Bill Smith of Byzantium Books on Flickr


  1. Ericsson, K. Anders, Ralf T. Krampe, and Clemens Tesch-Römer. 1993. “The Role of Deliberate Practice in the Acquisition of Expert Performance.” Psychological Review 100 (3): 363–406. doi:10.1037/0033-295X.100.3.363.
  2. Frey, Nancy, and Douglas Fisher. 2013. “A Formative Assessment System for Writing Improvement.” English Journal 103 (1): 66–71.
  3. Goodrich Andrade, Heidi. 2000. “Using Rubrics to Promote Thinking and Learning.” Educational Leadership 57 (5): 13–18.
  4. Kellogg, Ronald T., and Alison P. Whiteford. 2009. “Training Advanced Writing Skills: The Case for Deliberate Practice.” Educational Psychologist 44 (4): 250–66. doi:10.1080/00461520903213600.
  5. Shaughnessy, Mina P. 1979. Errors and Expectations: A Guide for the Teacher of Basic Writing. 2nd printing. New York: Oxford University Press.


The post Your Rubric Asks Too Much, Part I was published to the Eli Review Blog in the category Pedagogy.
