This week we welcome Prof. Stephanie Kratz as a guest author. Prof. Kratz is Distinguished Professor of English at Heartland Community College in Normal, Illinois, where she teaches composition and literature. She has an interest in faculty development and has facilitated a webinar, a conference presentation, and a faculty academy on alternative grading. She is currently facilitating the Ungrading Learning Community, Part II, which is funded by a college grant.
In a recent meeting of my college’s Ungrading Learning Community, we were discussing how to design effective alternative grading practices. After reading about the many assessment methods that fall under the alternative grading umbrella (standards-based, specifications, labor-based, self-grading strategies), we were breaking down the benefits and limitations of each approach for various contexts and disciplines. As workshop leader, I encouraged faculty to look over their options and choose one place to start as they plan their projects for a future course. “Don’t bite off more than you can chew,” I told them. “Start small” is a teaching strategy that has worked for me over time.
Like many faculty members who are about to start alternative grading practices, my colleagues had many questions. “If I start by using alternative grading for only one type of assignment, what’s the motivation for students to put in the work?” one business professor asked. “Yes, how does this alternative grading strategy fit into my graded course?” one biology professor contributed. “I’m concerned about the logistics of this approach as it applies to our certification exams,” added a nursing professor.
My colleagues had pointed out practical challenges of adopting alternative grading. How to make the transition? How to tell whether the approach is a good fit? As others have discussed on this blog, there are many considerations. Beyond a theoretical understanding of the educational benefits, adopting an alternative grading strategy requires answering a lot of questions. Because many of us are new to alternative forms of grading, I propose using the same feedback loop described on this blog: do something, get feedback, think about the feedback, and make changes based on the feedback. My journey with alternative grading is one illustration of this feedback loop, complete with trying new things, making mistakes, and listening to feedback.
I first decided to experiment with alternative grading practices around 2018. I had grown frustrated with trying to fit student work into five levels of letter-grade boxes. My rubrics had grown longer and more complex as I tried to cover all possible feedback that would apply to everyone. This is when I began reading about alternative grading in earnest. Social media, newsletters, and podcasts featuring scholars like Jesse Stommel, Susan Blum, and David Buck became my coaches. I read about many practices that sounded like they would simplify the grading process.
I decided to start small by adopting a Complete/Incomplete mark for one formative assessment in a Composition 2 course. The assignment was a source credibility analysis; students were choosing sources for a research project. On this assignment, “Complete” was supposed to convey to students that they were asking important questions about their possible source. For instance, is the author biased? Is the information recent? Can you find the same facts in other sources? My intention in using “Complete/Incomplete” was to indicate whether the students were thinking critically about sources.
As I began to assess each student’s work as Complete/Incomplete and provide written feedback, however, I found the two-level mark limiting. Although I had a definition of “Complete” in my head, I did not share it with students beyond a vague description that “Complete” meant work that included all assignment requirements. But the mere presence of a required component said nothing about its quality. That is, if students mentioned a source’s author but did not accurately identify bias, what mark should I use? Technically, the work was “Complete” because they said they considered the author’s credibility.
And yet “Complete” did not provide feedback about how well the author’s bias was analyzed. In short, my definition of “Complete” was unclear, so the alternative grading method did not help me communicate about student learning. Reflecting on the experience now, I can identify mistakes in my design. For example, my decision to use two levels to assess the entire submission was an oversimplification; there were too many criteria to assess. A better approach might have been to assign “Complete/Incomplete” to each analytical criterion (bias, recency, accuracy) instead of one “Complete/Incomplete” for the entire assignment. In hindsight, I know reality didn’t match my goal.
Although my first attempt at alternative grading did not go as I had planned, I learned something about the way I was using grades to communicate. With five letter-grade levels or 100 percentage points, I had been able to soften the feedback for work that showed promise but did not demonstrate mastery. I had wiggle room to reassure students that the work was “close” to passing without having to mark it as “failing.” Marking a complex process like analyzing source credibility with a single two-level mark, by contrast, sent an even harsher message. Students gave me little feedback about my approach, probably because they barely noticed the lack of a grade in the midst of an otherwise-graded course. However, I was determined to try again, this time using a different strategy. Over the next few years, I slowly added more alternative grading to my courses. In 2019, I first used an assignment rubric with no grades, and in 2022, I launched a course grounded fully in alternative grading.
Over time, I continued to refine my approach. More reading led me to the “EMRF” strategy of multi-level feedback1. I had a conversation with a colleague who used something similar: a 3-2-1 Feedback scale. “3” means the work is exceptional, “2” means the work demonstrates an acceptable level of learning, and “1” means that revisions are needed for the work to be considered passing. “0” is reserved for missing work. I liked this direct feedback that, I thought, would help students see their strengths and weaknesses, so I adopted it in my composition course for all formative assessments. Paired with a brief statement on the syllabus that “these assignments will not receive grades but instead detailed feedback,” I felt the scale gave students more context. The formative assessments marked with 3-2-1 were low-stakes assignments meant to help prepare students for the longer, traditionally graded assignments. I hoped that this new approach would encourage students to look past the numbers and focus on learning.
My 3-2-1 mark was always accompanied by written feedback about strengths and suggestions for improvement. When students had questions, I encouraged them to speak with me, and many did just that. Student responses to 3-2-1 Feedback were both encouraging and challenging. Some said things like, “This system gives me less anxiety when submitting assignments because I know that if I get something wrong or miss a point, I’ll have the chance to go back and correct myself.” Others expressed some anxiety about 3-2-1: “The philosophy of grading is new to me, and when I started this class it was honestly really scary.” Many students struggled with the fact that three, two, and one are still numbers. It takes a lot of deconditioning to communicate to students that a 2 out of 3 does not automatically equal 67%. Comments like these motivated me to create explicit statements that described my 3-2-1 Feedback and why I was using it. I also reflected on what else I could do to help students understand my reasoning.
My next addition, meant to highlight learning, was to institute reattempts without penalty. These optional reattempts focus students on my written comments and have helped immensely. One student commented, “In this class, I am given the grace to mess up and still be able to correct my work. This led to me trusting myself and my work more and to me spending less time overthinking what I think my professor wants to hear.” Students are more in tune with their progress towards the course learning outcomes. While the theory behind reattempts makes sense to me, the logistics have been challenging. At first, I allowed reattempts on any assignment at any time, and I found myself buried under a metaphorical pile of resubmitted work near the end of the semester. Students are now limited to three reattempts per semester, within a limited time frame (reattempts for work from weeks 1-4, for example, must be submitted by week 5). I also require a written explanation with each reattempt about what has been changed and why. I instituted the reattempt limitations this semester for the first time. So far it’s been going well, but I may revise this approach again once I reflect on the whole semester.
One more addition to my alternative grading efforts has been to adopt two reflections (one at midterm and one at the end of the semester) in which students identify the grade they have earned. In these reflections, students answer a series of questions about their effort and performance:
About how many hours per week do you estimate spending on this course?
Did you complete all the discussion boards and assignments?
Did you read and view all the texts and videos that present course content?
Did you resubmit any of your work to make it stronger?
What have you learned? What challenges and questions remain for you?
Following these questions, students describe how their work has demonstrated achievement of the course learning outcomes and use evidence from their work to support their ideas. These student reflections are crucial to my understanding of what students have learned and to my confidence that the alternative grading is working.
My students are accomplishing just as much in my course now as before. In an end-of-semester reflection, one student said, “This is the least stressful class I’ve had all semester and I feel like I still learned just as much as the others.” Students are experiencing self-directed learning. Another student wrote in a midterm reflection, “I feel as though it’s giving me a chance to reflect on my work and improve on it rather than getting slapped with a grade and then forgetting about the assignment.” I’m hopeful that students’ increased awareness of their learning will serve them even after my course is done. My favorite student comment so far has been: “I feel like I am a part of my own education in this class, and I am loving that feeling.”
The struggle to implement alternative grading is real and makes for a wild ride. What have I learned?
Local support is crucial. Uncharted territory can be scary, and unexpected problems will confront you. Thoughtful colleagues who work on the ground, know your institution, and can puzzle through the challenges with you are an invaluable resource. Start an alternative grading or teaching work group, join an existing group2, or simply knock on your neighbor’s door and ask to bend their ear. You can navigate the alternative grading waters together and analyze possible adaptations that fit your course, students, and institution.
Discuss alternative grading with your students. There is no “right” way to use alternative grading; it depends heavily on your situation. Take your time finding the right model. Use the feedback loop described above, and listen carefully to what students are saying about their learning experience. Student feedback helped me see limitations I had not considered. Explicitly talk with your students about the reasoning behind your methods. Raise the issue of what grades mean and how they make students feel. You will learn so much from them.
Be prepared for discomfort yourself as you shift your strategies. I was surprised by how much a teaching experiment involving one minor homework assignment impacted my teaching overall. But somewhere along the way, I realized that if I’m changing the underpinnings of my approach to learning, it can’t stop with one assignment. It is rippling through the way I present content and engage students and colleagues across all my professional pursuits. Recently, in a conversation about my approach to alternative grading, a comment from a colleague hit hard: maybe our job as professors is shifting from content providers to critical-thinking motivators. The content is out there; we just need to light the fire.
In spite of the challenges, I am proud of my goal to make grading more meaningful. My continued reflective practice surrounding alternative grading leads me to think about the barriers. Students resist, and learning management systems block our way. Institutions hold firm to systems that hinder our ability to innovate. We find ourselves in situations much like the ones we set for our students: how to problem-solve in the midst of challenges. It is tempting to give up when we get stuck. But isn’t that what education is all about: learning and growing? As you continue along your journey, remember that even starting small has its challenges. Let’s celebrate our mistakes as pathways to progress.
All student comments used with permission.
Robert says: We’ve written about this rubric here on the blog before, under the name of the EMRN rubric. You can grab an image of that rubric here. The original format of the “EMRF” rubric was invented by Rodney Stutzman and Kim Race in a 2004 article in Mathematics Teacher magazine.
RT: Like the Alternative Grading Slack workspace!