Article Review: A Specifications Grading Readiness Assessment
What factors can help or hinder my success in using specs grading?
Many articles describe how to implement alternative grading. Many of them discuss its impact on students. Some give advice about how to establish trust or “buy-in” with students, and even with colleagues. But few address the critical first step: How do you know when you are ready to use alternative grading?
Today, I’ll examine an article that does exactly that:
Streifer, A. C., & Palmer, M. S. (2021). Is Specifications Grading Right for Me?: A Readiness Assessment to Help Instructors Decide. College Teaching, 1–8. https://doi.org/10.1080/87567555.2021.2018396
What is the article about, and who is it for?
A common concern about alternative grading goes something like this: “I’m interested in this alternative grading thing. But I don’t want to risk doing something that could backfire.” Streifer and Palmer take that concern seriously. Their article guides people who are interested in specifications grading, but haven’t used it before, through a series of reflective prompts designed to help them identify their level of “risk” in adopting it. The authors also give advice about the relative importance of those risk factors.
It’s important to know that this article isn’t about using specifications grading. You won’t find any advice about planning a class using specs, or how to design it or implement it day-to-day. Instead, the focus is entirely on you, the instructor, and whether you’re in a situation where you are ready to use specifications grading.
Even specifications grading isn’t really the focus. The types of questions the authors ask, and the issues they raise, will benefit anybody who is interested in any form of alternative grading — including standards-based grading, ungrading, etc.
So in the end, this article will be useful for anybody who is interested in using alternative grading of any type and wants to think through the possible risks.
What does the article say and do?
Let’s begin with the structure of the article. This is a bit confusing, because it comes in two parts. The article cited above sets the stage: It explains why a readiness assessment for using specifications grading is needed. It also describes the authors’ backgrounds as educational developers, their choices in making the assessment, some caveats about how it should be used (which I’ll address later), and some details about the prompts they use. But the actual readiness assessment – the thing containing the reflective prompts, questions, and advice – is a separate four-page document found in the article’s supplemental materials.[1]
I’ll mostly focus on the readiness assessment itself, informed by the article. The assessment consists of four pages of questions, prompts, and (occasionally) advice, with each page focused on a specific theme. There’s room for you to write your responses, divided into high-, medium-, and low-risk categories. The authors define a high-risk factor as one “likely to impede the successful implementation of specifications grading,” although I would suggest it’s more useful to think of these factors as prone to highly variable or unpredictable outcomes.
Pages 1 and 2 focus on institutional culture and the support – or lack thereof – that instructors might get when using alternative grading. Page 1 asks readers to reflect broadly on three prompts related to their institution, its culture and expectations, and especially how the institution values and evaluates teaching. The goal of this part is for you to generate a list of issues to consider and self-categorize them into risk categories. The authors don’t offer any particular advice here.
Page 2 repeats these general prompts, but now adds detailed examples of issues that could arise, organized by risk category. An example of a high-risk issue is: “SETs [Student Evaluations of Teaching] are only source of data to evaluate or provide feedback on instructors’ teaching.” This is high risk because an unfamiliar grading system might provoke negative student responses in SETs, and an evaluator might latch on to those, causing trouble for the instructor. On the other hand, having more nuanced and flexible sources of data for teaching evaluations is listed in the “low risk” category. This order – reflect on page 1, then consider specifics on page 2 – is helpful because it ensures you’ve considered your specific context, while also pointing out critical issues that everyone should consider. But I did find it a bit odd to fill out one page, and then suddenly face the exact same questions on the next page.
Page 3 asks the reader to reflect on how their personal and professional identities might intersect with and affect their choice to use alternative grading, and its outcomes. Here the authors (intentionally) offer no advice at all. The prompts focus mostly on your willingness and motivation to use alternative grading, and you sort your responses into risk categories yourself. The authors do give a couple of brief case studies in the article, again without advice, but for the most part you’re on your own.
Page 4 has you identify day-to-day factors related to students, your own background with the class and teaching as a whole, and external course or curricular requirements. This is where the authors give the most direct and (in my opinion) useful advice. Student factors include things like the size of your classes and student background. Instructor factors include your level of teaching experience, content expertise, and status or rank. Course and curricular factors have to do with external requirements, such as required grade percentages or courses that are coordinated or part of a sequence. (You might recognize a lot of these from “Step Zero” in our own alternative grading workbook.)
The authors divide these factors into high-, medium-, and low-risk categories and ask you to identify which ones apply to your class. Some of these divisions are, by necessity, pretty arbitrary. For example, a class of 49 students is “medium risk” but 50 is “high risk”, and an instructor who has taught for 5 (but not 6) years is at “medium risk”. The authors give some advice about interpreting these categories on the back of page 4. I found it helpful to think about these factors holistically — do I have a lot of high-risk factors? — and not focus too literally on individual details.
Streifer and Palmer point out that “high risk” doesn’t necessarily mean don’t go there! Rather, high-risk factors require careful thought and planning, and the identity-related considerations from page 3 can affect each individual’s level of risk tolerance and willingness to swim upstream. I agree with this: it’s not necessary to throw in the towel if you identify a high-risk factor in your setting. But it is important to think carefully about your risk tolerance, and to make plans based on it.
One risk factor that the authors highlight is institutional concerns about “grade inflation”. They make an excellent argument that alternative grading systems can lead not to inflation but rather “grade elevation”. That is, students’ grades are elevated by greater learning. That’s a fantastic phrase that I’ll be using from now on.
The end of the readiness assessment is a bit abrupt: After working through these four pages, you’re left on your own to draw a final conclusion. The assessment won’t tell you whether you are ready to use specs or not. There is no numerical scale, no lookup table, not even a general note about looking for a balance of risk factors. The sense is that only you can make the final decision to start using alternative grading, and that the assessment’s role is to help you see all of the factors involved in that choice. But it can come as a surprise to find that, in an article focused on “am I ready to use specifications grading?”, there’s no final advice on that very question.
What do I think about the readiness assessment?
The article fills an important niche: Helping instructors who are new to alternative grading (of any kind) think through the many factors that might affect their decision to use it or not. The biggest strength of the assessment is that it helps ensure you’ve thought through your motivations and circumstances carefully.
I do wish that the authors had more explicitly treated “readiness” as a spectrum, and given more direct advice for readers at different points along it. This is one place where the sole focus on specifications grading limits the authors’ view. There’s not just one right way to use alternative grading, specifications included, and readers might be ready to implement some types of alternative grading but not others. For example, large class size and required grade percentages are both treated as high-risk factors, because they don’t necessarily fit well with specifications grading. I’d frame these instead as a perfect opportunity to use an approach tailored to that kind of situation, like Standards-Based Testing.
That said, it’s entirely possible that your answer to “am I ready to use specifications grading?” (or standards-based grading, or ungrading, or…) is: not yet. Even if you’re incredibly hyped about the great ideas in alternative grading, it might not be right for you, right now, and the assessment can help you figure that out. In the spirit of alternative grading, that’s not a final judgment, and the assessment can also help you figure out when you are ready.
The article ends with what I think is some important introspection. The authors both work as educational developers in a Center for Teaching Excellence, and recommend pairing the readiness assessment with guidance from an educational developer (I think they see this as where you could get more direct advice). But they point out that this isn’t always possible: “having been educated within the same traditional evaluation culture that specifications grading seeks to challenge, many (perhaps even most) educational developers lack the necessary first-hand experience with alternative evaluation practices to support instructors who wish to experiment in this realm.”
This is a real issue, and it points to the importance of having a broad community of practice to support new alternative graders.[2] One of the main reasons this readiness assessment is so valuable is that it provides a source of advice and knowledge that many instructors don’t have at their home institutions.
But this also hints at a much-overlooked benefit of using progressive pedagogy: providing positive models for future teachers. Experiencing alternative grading as a student both exposes future teachers to new ideas about grading and can give them confidence that those ideas are feasible to use in their own classrooms. This is true for future teachers at any level, K-16 and beyond. I can say from personal experience that being exposed to alternative grading as an undergraduate made me aware of the idea in the first place, and helped me be more confident when implementing it in my own classes. Without that exposure, you don’t know what you don’t know.[3]
I’m guessing those who have already thought carefully about why – or if – they want to use alternative grading will not find much new in this readiness assessment. However, if you are just encountering alternative grading, are concerned that you’re missing some important considerations, or are an educational developer helping others, this article provides a valuable resource to help you think through the situation.
[1] The assessment lists a third author, D. Bach, in addition to the two authors of the article.
[2] The Alternative Grading Slack is one such place!
[3] If you want to see the kinds of things I was experiencing, check this out: Olson, D. A. (1996). On the abolition of grading. PRIMUS, 6(4), 289–307. https://doi.org/10.1080/10511979608965833. “Ungrading” has existed for a lot longer than you might think.
A reply from article co-author Michael Palmer:

Thank you for sharing this balanced critique of Adriana Streifer’s and my work. Like you, we wish we had broadened it beyond specs grading, since the ideas apply regardless of the alternative grading scheme one adopts. Providing entry points for those not quite ready for full-blown specs grading would also have been helpful. As you know, one doesn’t need to go all in to make their grading practices more equitable and learning-focused. Too bad you weren’t one of the original reviewers of our manuscript. :-)