Last year, I addressed some common myths about ungrading and why they are, well, myths. There are lots of persistent myths and misunderstandings about alternative grading as a whole. For some reason, some of those myths are most commonly phrased as complaints about, or even attacks on, the people who use alternative grading. I find these person-focused myths especially obnoxious, and so today I'll address them head-on.
Myth: Ungraders are lazy instructors who just don’t want to grade
This first myth focuses on the word “ungrading”, which – just a reminder – is woefully ill-defined. The actual meaning of ungrading doesn’t really matter here, because it’s the very word that causes trouble. Some people hear “ungrading” or “I’ve eliminated grades from my class” and jump to the conclusion that this person is lazy and just didn’t want to grade! I mean, it’s right there in the name, right?
Let’s start with an underlying misconception: no grades is not the same as no feedback, and indeed helpful feedback is one of the four pillars of alternative grading. Ungrading, and alternative grading in general, focuses on helping students understand what they’ve learned, and what they need to work on. Feedback is the key mechanism to make that happen.
When an ungrader gets some work from a student, they don’t ignore it. They take a look at the student’s work, think about what evidence of learning it demonstrates, and give helpful feedback in actionable terms.
So, no: Ungraders, and alternative graders in general, are decidedly not lazy. Giving good feedback is hard, and takes a lot of time. Luckily, giving feedback is also much more enjoyable (and useful!) than assigning a traditional grade. To illustrate the difference in helpfulness and effort, consider these two options: First, X-ing out a sentence and then writing "7/10" at the top of a paper. Second, writing something like this but leaving no mark at the top of the paper:
Excellent use of diagrams, they do a great job of illustrating your steps, especially the last one where you ‘pull out’ the key triangles to focus on them. Be careful with notation though: the last sentence says that lengths are congruent. What kind of objects can be congruent?
(This is actual feedback I recently wrote. There was an arrow pointing at the sentence in question.)
Overall, alternative graders are some of the hardest working instructors I’ve met. They’ve made the intentional – and voluntary – choice to go through the effort of changing their grading methods in a major way. They’re taking on this extra work because they want something better for their students and themselves. As a whole, alternative graders show a huge amount of care for their students, and that’s exhibited in many ways big and small. They are most certainly not lazy.
Myth: Alternative graders have low standards because they let students try problems over and over
This is a misunderstanding of both how humans learn and the fourth pillar of alternative grading: reassessments without penalty. Sometimes people hear about this pillar and jump to the conclusion that alternative graders let students take the same tests over and over until they get them right (I've heard that exact phrase – take the same tests over and over – used in this context). So if we're just letting students do the same thing over and over, of course they're going to get credit eventually, but that doesn't mean they've learned anything, right?
I don’t know of anybody who does what this literally says, offering reattempts by handing students the exact same assessment and having them do it again. “New attempts” are just one form of reassessment in alternative grading, and when they’re used they involve students completing wholly new questions covering the same standards or topics. A big benefit of a “new attempt” reassessment is that it asks students to solve a new problem from scratch, showing that they can fully address the relevant standards. That’s exactly the opposite of what this myth claims.
Maybe people who say this – and again, I have literally heard "take the same tests over and over" – are making an extreme statement for the sake of argument. It's more likely (I hope) that their real concern is that new attempts could be too similar to previous attempts. Then students could learn to parrot back "what the instructor wants" without actually learning the material. This is a valid concern, one concisely expressed by Joshua Bowman: "a second attempt at a conceptual question becomes a procedural question." Note, though, that the same problem arises in traditional grading, when instructors do things like give a cumulative final exam or offer a "practice test" as part of a review day.
Alternative graders are, in my experience, highly attuned to this issue. The number of sessions I’ve seen at the Grading Conference about how to write good new attempts attests to both the challenge and the dedication of alternative graders in addressing the issue. But again – in my experience – when students are asked to revise and reflect on their previous work before making a new attempt, the result is real learning. It’s engagement in a feedback loop, the core of human learning. If new attempts didn't support real learning, none of us would know how to walk, speak, write, or do any of a thousand everyday things that required multiple attempts.1
Myth: Alternative graders have low standards because their course grades are higher than average
As you can see, “alternative graders have low standards” is a sort of ur-myth2, and people tend to find different things to point at to justify this belief.
The following is true: In some alternatively graded classes, final grade distributions are higher, often with more A’s and B’s than in traditionally graded sections.3 So, doesn’t a higher grade average mean that alternative graders have low standards? Doesn’t it mean a lack of rigor?
I think the heart of this concern is grade inflation. People see more A’s and B’s and jump to the conclusion “that instructor has low standards” rather than thinking “maybe that instructor is doing something that helps students learn better.”
Robert and I have both written about grade inflation and higher grade averages before, so here’s the short argument: “Grade inflation” means higher grades without corresponding increases in learning. But because grades in alternatively-graded classes are directly linked to clear evidence of learning (standards or specifications), any increase in final grades does correspond to actual learning. In addition, because traditional grading penalizes students for engaging in the human process of learning via feedback loops, many traditional final grades are artificially lowered by the averaging process. So it makes sense that alternative grades come out higher, and that isn’t due to inflation.
In addition, alternative grading does hold students to high standards: By requiring students to meet clear standards or specifications, there’s no way to get by with partial credit. A student who has “met expectations” or earned a “satisfactory” mark had to do so by demonstrating full understanding. “Higher standards” is even one of Linda Nilson’s key selling points for Specifications Grading in her book.
So, higher grades are directly linked to greater learning and higher standards. Lower standards are nowhere to be found.
Thanks for coming along for another round of myth-busting. If there's one takeaway from today's post, I suppose it's that people like to jump to conclusions about new ideas. That's a good reminder that when I encounter a new idea that rubs me the wrong way, or a clickbait headline telling of the latest outrage by [insert group I don't like here], there's probably more to it than my surface-level initial understanding. It's worth taking time to investigate a bit more deeply to see what's there.
1. In grad school, for some reason now lost to the depths of time, I decided I wanted to learn to type on a Dvorak keyboard and spent the better part of winter break learning it. Repeated attempts at the exact same problems were in fact the only way I learned at all.
2. "Ur-myth and Dvorak keyboards making an appearance in the same article... this blog is off the chain." – Robert
3. For example, see Harsy & Hoofnagle 2020, or Chen et al. 2022 (pre-print). However, this isn't universal, and other instructors report unchanged grade averages, or even lower ones. We'll go into a lot of detail about this in our upcoming book.
When reading about "lazy graders," I immediately thought about how much I dislike the use of online homework paired with auto-grading software (e.g. ALEKS). And I congratulated myself for not being one of "those teachers."
Then I read your last paragraph about not being quick to jump to conclusions.
So now I'm wondering if there is a place for auto-grading software. It certainly does provide quick feedback. The big drawbacks I see are how it can encourage mimicking procedures (as opposed to learning), and how classes can slip into using the software as the only form of feedback.
Do you know of any instructors in the alternative grading community that use auto-grading software? Do you have any principles regarding its use?
Thanks for the great post!
This comment is not from a myth believer: I'm using semi-mastery grading myself this semester. I just want to have a healthy discussion in a polarized world.
"Although in this context, the same problem arises in traditional grading, when instructors give a cumulative final exam or offer a "practice test" as part of a review day."
Practice tests do not need to have the same problems as those given on the exam, and neither should cumulative final exams.
Final exam questions often synthesize several topics, and students must interleave their knowledge to answer them, since the questions are not labeled with a particular standard.
SBG can miss the component of students having to identify which topic a question belongs to, especially when no cumulative final exam or project is given.
As for reusing the same problems: it is a real issue if, say, only the limits of integration are changed on an integral. And some SBG advocates let students take a question home if they still fail it.
There is an equity issue as well: who can afford to reattempt more often and meet outside of class? Usually the student carrying a light course load who isn't working outside of school.
SBG's success depends on some guardrails and flexibility, like anything else in life! As an example, many of us have witnessed how flipped-learning "purity" has changed over the years: most practitioners now include some mini-lectures, acknowledge that some topics need more direct instruction than others, and make flipped-learning resources available to non-flipped sections, narrowing the gap in student performance between the two formats.