Planning for grading for growth: Fleshing out the feedback
Prototyping the most important parts of your alternative grading system
This week we’re continuing the multi-part series we’ve been doing on how to build an alternative grading system. However, I want to start by saying yeah, I know that the semester has started for most people and it’s way too late to be building a new grading system for your Fall courses. I definitely could have timed this better. However, this series should still be useful. If you weren’t quite ready to dive into alternative grading this fall, then use this series to think about next semester. Or, if you are doing alternative grading this fall, use these posts to compare our notes with yours — and if you have improvements, suggestions, or different points of view, then please, share those in the comments!
In Part 1, we investigated the who, what, when, where, and why of your course. In Part 2, we set up the module structure and standards. In Part 3, we examined the primary focus of the course and wrote out a general plan for assessments. Part 4 went into detail on choosing the marking methods and writing a narrative description of what a “C” and “A” mean in your course. In this installment, we turn to the core of your grading system: Feedback loops.
The centrality of feedback loops
We mention feedback loops a lot here on this blog, and for good reason: They are the fundamental engine behind all human learning.
There’s a reason why feedback loops make up the second and fourth pillars of the Four Pillars of Alternative Grading. When we learn anything of significance, it’s because of our engagement with a loop: We do something; then we get helpful feedback from a trusted source; then we analyze and reflect on the feedback; then we make mindful changes to our processes. And then we start the loop over with inputs that are informed by the previous iterations.
And eventually, the outputs we produce are “satisfactory,” whatever that might mean in context. So it’s really more of an inward-winding spiral than a loop.
While feedback loops are at the heart of everything related to learning, they are almost totally missing from traditional grading practices. Instead, we have one-and-done assessments, usually with no helpful feedback given (a point total, or an “X” through incorrect work, or a two-word interjection scrawled over someone’s work does not count as “feedback”), and almost never any opportunity to redo work that needs it.
So one of the most important things to do when building an alternative grading system is to make sure those feedback loops are in place. We want to be intentional about setting up these feedback loops, so that they are first-class citizens in the course and not something bolted on after the fact. Let’s walk through how to build them.
Outlining the core feedback loops
A feedback loop needs two things: A plan for giving feedback, and a mechanism for reassessments. In the previous steps in this build process, you outlined a list of assessments to give in your course that fit with its background and primary focus. For each general type of assessment you’ve identified, do two things:
First, identify the means of giving feedback: Will you be writing on paper? Leaving comments on a PDF or via your LMS’s grading software? Having individual meetings with students?
Next, identify the form of reassessment. There are two fundamental options: new attempts, or revision of a previous attempt. New attempts are often paired with skills and standards; they could happen in class, in office hours, online, or in many other ways. Revisions, on the other hand, make sense for processes and specifications. Think about what makes the most sense for your situation, and what you want students to get out of the reassessment process.
Whatever you decide, keep in mind that every assessment should include a feedback loop unless there is a compelling reason not to have one. This means that for nearly every assessment you give, you should plan on giving not just a mark (if you plan on giving one at all) but also Helpful Feedback, and students should be allowed to Reattempt Without Penalty.
There are some cases where a full feedback loop isn’t feasible, or where it will need to look a bit different. These include:
Pre-class work. If students complete work as preparation for a class session or activity, redoing the work once the event is over might not make sense. You can still give feedback on the work. But instead of a reattempt, consider making the assessment more mistake-tolerant, for example by grading it on completeness and effort only.
End-of-term projects or essays. Items that occur at the very end of an academic term are difficult if not impossible to reattempt. For items like projects and papers, you can have students turn in drafts earlier in the term, allowing you to “pre-assess”.
Final exams. Final exams suffer even more from end-of-term limits. Some of the case studies we have written about here give some hints on how to deal with final exams, for example Joshua Bowman’s calculus class in which the final exam is a recertification of “core” standards, or Hubert Muchalski’s organic chemistry class which includes a common final as part of a specs grading setup.
As you think through the feedback loops for your assessments, also keep in mind that while we want students to have Reattempts Without Penalty, this doesn’t mean there should be no limits. It’s OK to place reasonable restrictions on how reassessments will work. (Imagine yourself in the last week of the semester. How much are you willing to do when the reassessments are piled up?) A token system is a common way to regulate reassessments; our case study on Kay C Dee’s biomedical engineering course gives some details for how it works in real life.
Don’t skip this: Being overwhelmed by reassessments is one of the places where new alternative graders have the most trouble.
Can we build it? (Yes we can)
Once you’ve outlined the feedback loops in your system, the next step is to zoom in on one assessment by building a prototype and practicing the feedback loop process with it. A “prototype” of an assessment is not a final or perfect version, but something that has just enough features to be usable by real people.
If you’re converting a class that you’ve previously taught, you likely already have assignments from the last time you taught the course. We encourage you to reimagine one of those as a new assessment, and follow the steps we’ve listed here to generate it anew. This helps you think about how your assessments align with your new grading system.
Your prototype should include:
A mock-up of the assessment. Ideally this should be in whatever format the final version will have: a PDF, a quiz on your LMS, or a printed sheet of paper.
The specifications or standards used to evaluate student work. Write a full set of specifications or standards in a form you will share with students.
The workflow for how students will turn in their work. For example, will students turn in their work on paper; type and submit it electronically; or scan their paper work and submit it to an LMS or via email? Write this out for yourself, and write detailed instructions for students.
The workflow for how you will grade and give feedback on the work. How and where will you indicate marks? How will you give feedback?
The workflow for how you will return marks (if any) and feedback, and how students will access it. There can be significant technical issues here. For example, if you are marking work as “Satisfactory/Not yet” then how will you get your LMS to display those marks, rather than points? Do you need to add a list of standards to the assignment?
As you build this prototype, use the narrative for “C” and “A” grades, where you wrote out what “minimally passing” and “outstanding” work look like, to help yourself and your students understand why the assessment exists and what’s important about it. Once you build the assessment, ask yourself: How does this help determine if a student minimally passes the course? How is it used to determine if a student has done truly outstanding work? Why is this assessment significant in the course? Develop good answers to these questions, because students will probably ask them! And if you’re having trouble answering them, it might be a sign to rethink whether that assessment should be given at all.
Also “rehearse” the feedback loops you outlined above. Think through how the process of reassessments will unfold. If you’re using new attempts, when will they happen and how will you show which problems are new and which are reattempts? If using revisions, how, where, and when will students submit them, and when will you grade them? In all cases, how will you communicate the results?
Finally, “stress test” the standards and specifications. How would you mark and give feedback on submissions with common misconceptions, errors, or omissions large and small? Are there likely “borderline” cases? Does something important get missed, or something unimportant cause trouble? Can you clarify the specifications or standards to address these issues?
We’re taking another blogging break next week for the Labor Day holiday, but we’ll be back again soon with some of the final steps toward building your alternative grading system. Coming up: Creating a system for assigning course grades, and asking the question What could go wrong?
We are borrowing the idea of the prototype from our business and design colleagues and the concept of design sprints, where you build a very basic but fully functional version of a product and put it in front of real customers, so you can learn things about it before it goes to market.