Three ways I am simplifying my alternative grading this semester
Because the process never ends
Keep it simple is the "Prime Directive" of alternative grading. It's a message that David and I have pushed relentlessly here on the blog and in our book, because many well-intentioned attempts at reforming a grading system end up foundering on overcomplication: Too many topics, standards that aren't stated clearly, wildly optimistic time estimates for reassessment, and so on. Even the best, most student-focused intentions will run aground if the system itself isn't simple enough.
I've failed the simplicity test many times myself. In my origin story, I described my first attempt at specs grading: 68 (!) learning objectives in the class, each of which required multiple layers of assessment and reassessment. One of the things I learned is that simplicity is hard for college faculty: Our default state is to add more to systems whenever possible, and removing or simplifying elements of a system causes a feeling similar to actual physical pain.
And yet, students almost always benefit from simplification¹, so as faculty we must always think about ways we can simplify our systems, including grading. In this post I want to share three ways I'm simplifying my grading system. I have an unusual opportunity this year, as I am teaching only one course -- Discrete Structures for Computer Science 1 -- two sections in the fall and two sections in the winter (i.e. now²). This is the most common course in my teaching portfolio, and the last time I wrote about it was a couple of years ago. The Fall offerings of the course turned out to have a number of ways they could be improved, most of which hinged on making things simpler. So here they are.
Just one level of mastery (not two)
In Discrete Structures, there are basic "atomic" skills that students need to master, like being able to set up a mathematical induction proof or find the intersection of two sets. In the past, I'd had 20 of these skills, all clearly listed in the course syllabus. For the Fall course, I cut these down to 16 skills and designated eight of them as "Core" skills. Here is the portion of the Fall 2023 syllabus that shows these.
Students demonstrated their skill with these "Learning Targets" primarily through weekly quizzes called Checkpoints. Here is one of those. Checkpoints are cumulative, with Learning Targets from earlier Checkpoints reappearing on later ones to allow reattempts without penalty. I kept track of the number of times students had a successful demonstration of skill on each target, with "success" defined by the "Success criteria" printed below each problem. One successful demonstration of skill put the student at "Level 1" on that skill; two successful demonstrations, on separate Checkpoints, put the student at "Level 2", which we counted as "mastery", with no further demonstrations needed.
My intent was that two successful attempts on a skill constitute "mastery", while a single successful attempt should still earn some "credit". For example, for a "C" in the class I did not insist that students master (via two successful demonstrations) a certain number of Learning Targets -- just that they master all eight of the Core targets and get to "Level 1" (one successful attempt) on any two others. That is, if you master the Core and then show some skill on a couple of non-Core skills, that's enough for a "C".
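If it helps to see the bookkeeping spelled out, here is a minimal sketch in Python -- with made-up target names, not my actual gradebook code -- of how the Fall tallying and the "C" requirement worked:

```python
# A hypothetical sketch of the Fall bookkeeping (not my actual gradebook
# code): tally successful demonstrations of each Learning Target across
# the cumulative Checkpoints, then check the two-level "C" requirement.

from collections import Counter

CORE = {f"LT{i}" for i in range(1, 9)}   # stand-ins for the eight Core targets

def levels(checkpoint_results):
    """checkpoint_results: one dict per Checkpoint, mapping a Learning
    Target code to True/False for whether the Success Criteria were met."""
    successes = Counter()
    for checkpoint in checkpoint_results:
        for target, met in checkpoint.items():
            if met:
                successes[target] += 1
    # One success = Level 1; two successes (necessarily on separate
    # Checkpoints) = Level 2, i.e. "mastery"; extra successes change nothing.
    return {target: min(n, 2) for target, n in successes.items()}

def meets_c_requirement(checkpoint_results):
    lv = levels(checkpoint_results)
    core_mastered = all(lv.get(t, 0) == 2 for t in CORE)
    non_core_at_level_1 = sum(1 for t, n in lv.items() if t not in CORE and n >= 1)
    return core_mastered and non_core_at_level_1 >= 2
```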
This is how it looked in the grade table for the course:
You can probably see where the problems came in. Students got confused by the two levels, particularly the way they "nest". Some didn't grasp that to get to Level 2, you have to get to Level 1 first. Some got the idea that Level 1 is a higher level than Level 2 (the way first place in a competition is better than second place). A few interpreted the table as meaning you have to get to Level 2 on Core targets, then start over and get to Level 1 again on those. And this is just a sample of the misconceptions students had about how the system worked. And it's my fault, not theirs.
The way I handled this for my current offering of the course is to do away with the concepts of Level 1 and Level 2, and only count "mastery" in the course grade. That is, students either master a skill by providing two successful demonstrations of it, or they don't. Here is what the new grade table looks like:
This is a pretty clear simplification. But it involves removing things from my course and its systems, and this doesn't come for free: We lose certain aspects of the course experience in the process. The question is whether my students and I gain more than we lose. What we have lost by giving up the two levels of mastery is some nuance in the picture of student growth that Level 1 provided; it also removes the closest thing to "partial credit" that students had, since one successful demonstration of skill no longer "counts". This loss is real. But I think what we gain makes up for it: We gain a single rule for thinking about mastery, which helps students focus their efforts, leads to a clearer understanding of the system, and lowers cognitive load overall -- not to mention raising academic standards.
Keep that idea in mind: Simplifying a grading system comes at a cost, but that cost might pay for itself later. And just because a single piece of a grading system provides some benefit for students doesn't mean we should include it, because the system as a whole might break down under the weight of all those parts.
Removing core skills
If you compare the two grading tables closely, you might notice that there were eight Core skills in Fall 2023 but only six now. That was another step I took: Removing two skills from the Core -- that is, the set of skills that every student must master in order to receive a grade of "C" (which is "passing" in their curriculum).
The problem with requiring mastery (two successful demonstrations of skill on Checkpoints) of eight Core skills was that many students found it hard to do so on all eight, despite lots of reattempt opportunities. Several students mastered six or seven of those eight skills and made it to "Level 1" on the remaining one or two. If I had followed my syllabus, I would have had to give those students a C-, which meant repeating the course and delaying their secondary admission to the Computer Science program. In the end, I did something similar to ungrading for these students: I looked at the totality of their work, consulted with students when I truly couldn't tell whether they had done "C" work, and finally used the whole picture to assign a grade, which was usually a "C". And of course, I took notes on what I did so that I applied the same standard to every student in a similar position.
I think students got a grade that accurately reflected their learning and growth. But it was time-consuming, kludgy, stressful, and unsatisfying. And it all stemmed from insisting that students master eight core skills and then blinking when they didn't.
My solution was to simplify by making the Core smaller and then insisting on mastery, which, as I mentioned earlier, was itself simplified to a single level instead of two. My argument to students is that this is a small number (six) of truly essential skills, each of which is simple. So I insist on mastery of that core -- no exceptions.
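Expressed in the same hypothetical sketch form as above, the simplified rule is noticeably shorter: a target is either mastered or it isn't, and the Core requirement is a single all-or-nothing check.

```python
# The simplified bookkeeping, sketched the same way (hypothetical names,
# not my actual setup): a Learning Target is either mastered -- two
# successful demonstrations -- or it isn't. No Level 1, no in-between.

from collections import Counter

CORE = {f"LT{i}" for i in range(1, 7)}   # stand-ins for the six Core targets

def mastered(checkpoint_results):
    """Return the set of Learning Targets with at least two successful
    demonstrations across the cumulative Checkpoints."""
    successes = Counter()
    for checkpoint in checkpoint_results:
        for target, met in checkpoint.items():
            if met:
                successes[target] += 1
    return {target for target, n in successes.items() if n >= 2}

def core_complete(checkpoint_results):
    # Non-negotiable for a "C": every one of the six Core targets is mastered.
    return CORE <= mastered(checkpoint_results)
```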
What we give up by doing this: Two skills that were formerly considered "Core" have been relegated to "supplemental" skills, i.e. not in the Core. Those two skills are performing arithmetic in binary and using basic rules of recursion. These are important skills for computer science! But in preparing for the Winter, I asked myself: Are all of the Core skills really core? Or are there some topics that I think, or someone else thinks, are essential but actually aren't? And I landed on those two. If I receive pushback from my math or CS colleagues about this, I'll let you know, but I doubt it will happen.
What we gain from this is a Core that is smaller and therefore easier to master; more student choice among the supplemental skills they pursue; and, once again, an overall process that is simpler and easier to grasp.
This illustrates another important concept in simplifying systems: Often you can simplify on the front end by removing inputs that don't need to be there. Topics that you might enjoy but which aren't really essential³; assessments that provide some information on student learning, but not at a level commensurate with the time cost of giving them; and so on. As management expert Peter Drucker once said, there is nothing so useless as doing efficiently that which should not be done at all.
No deadlines and no tokens
Students also complete "challenge problems" that are applications and extensions of the basic skills. In the Fall, each problem had an "initial deadline". Students were allowed to submit revisions of these problems later, as long as they submitted a complete, good-faith first draft before that initial deadline. But if they didn't, they were not allowed to submit anything on that problem at all. The idea was that if you plan on submitting any work at all on a problem, you need to submit something before the initial deadline, not as a Hail Mary at the end of the semester. Those deadlines could be extended by 48 hours by using a token (more on those here).
I was trying to be nice: I knew that deadlines have a lot of value as "commitment devices", so I put initial deadlines on these problems that could be moved simply by filling out a form to spend a token; afterwards, students could take their time revising.
But things quickly got out of hand. Having deadlines required having rules about deadlines, which added several layers of legalism to my syllabus and ended up with students running afoul of the rules. Some students would "meet the deadline" by submitting work that was partial, or obviously dashed off minutes before the deadline with no real effort expended, just so they could "revise" it later. That led to me disqualifying student work for not being a "good faith effort" and adding even more rules to the syllabus about what exactly a "good faith effort" is. There was also a lot of overhead: tracking tokens on Blackboard, tracking who submitted what and when, and so on. And students were confused about tokens; some spent their tokens after the deadline had passed, some spent multiple tokens hoping for 96+ hour extensions, and some spent them trying to extend deadlines on items whose deadlines could not be extended. My syllabus was a rat's nest of rules that caused the very confusion and stress it was intended to relieve.
The way I've simplified on this point is to go back to a much earlier policy of mine: Items have suggested deadlines that are not enforced, except for one big deadline at the end of the semester past which no work is accepted. And with deadlines removed, there was no need for tokens, so I took those out of the syllabus too.
Again, we give some things up this way. Without the hard deadlines, students don't have the built-in commitment devices I used to provide for them -- that responsibility is now theirs. And without tokens, the rules that do appear in the syllabus are what they are, and there's no systematic way to bend them. But again, we gain a lot: Every sentence in the syllabus about late work, deadlines, tokens, and all the rest has been removed. There's no need for all those rules. And this provides, like the other steps I've mentioned here, a clearer overall picture of what students should be doing in the course, and a clearer idea of the nature of the (few) actual deadlines we have. I think the benefit is worth the cost. (And if a student really needs me to bend a rule, they can just talk to me.)
We're only four weeks into the new semester, so the jury is still out on whether I was right about these steps. But the early signs are promising: Students seem much more at ease. They have far fewer questions about how the system works; in fact, they don't really talk about my systems at all, which to me is a sign of a successful system. I chalk some of this up to spending the first week of class doing onboarding activities, which I wrote about here. But the system into which you are onboarding students has to be friendly and simple before those activities can get traction.
What are some ways you are simplifying your own approaches now, or ways that you might do so?
1. Up to a point. Oversimplification is real. Basing a student's entire grade on one final exam, or an ungrading setup where students simply select their grade from a menu without a corresponding portfolio of work or any reflection on why they should receive it, is very simple! But it's rife with the possibility of false positives and false negatives.
2. Reminder that in Michigan, "Winter" semester is the climate-accurate name for what other people call "Spring".
3. You can include non-essential topics if you want, but remember also that you don't have to assess students on every single thing in the course. Non-essentials can be included as extension assignments, or optional opportunities to demonstrate skill, or just for fun -- no quiz required.