Drawing on past experiences
What does “teaching how we’re taught” mean for alternative grading?

When I was an undergraduate[1], one particular professor made a big impression on me. Everything about his teaching was different from anything I’d experienced before. His classes were radically student-centered, his grading was weird, and everything he did seemed rooted in treating students as real humans rather than names on a page or numbers to process.
In retrospect, I can put names to everything he was doing. The first class I had with this instructor – in my very first semester of college – used Inquiry-Based Learning. This was active learning, but also something beyond that. We – the students! – drove the course topics. We presented our ideas, work, and proofs to the rest of the class. We argued with each other, picked apart details, and kept going until we all understood. The instructor sat in the back of the room (what?!) and only interjected to give a summary or maybe get us back on track. One day he “forgot” to come to class for 20 minutes, and the rest of us just kept it going.
This freshman class was about something called group theory, a traditionally upper-level topic. Although I’ve taken more formal group theory classes since then (including several graduate-level ones), my deepest understanding of group theory is still rooted in the ideas I learned from this first class.
The grading, well, that was something else. Each class I took with the instructor had something unusual about its grading. Nowadays we might call it “ungrading” for lack of a better term[2]. The main characteristic was that the grading faded into the background. To the extent that there were grades at all, he used a “backwards” scale with marks like F - Fantastic, D - Delightful, all the way through A - Awful. He once told me that this was deliberately set up to try to break students’ expectations about how grades worked and what they meant. Final grades often involved submitting a portfolio. The portfolio included reflections on learning, a chance to make an argument for a specific grade, and the option to revise and resubmit previous work.
Keep in mind that all of this was happening 25 years ago, in a math department at a small university that was working overtime to be research-focused.
Years later, as a new faculty member designing my own courses, I was inspired by these experiences. I used active learning from the start, and have leaned heavily into Inquiry-Based Learning since then. My grading systems started out unusual and kept getting stranger, eventually leading to, well, this blog and my current professional life.
Because I’d experienced all of these innovative teaching and assessment techniques as a student – and because I’d had a positive experience with them – I was willing to try them as an instructor. My experiences gave me confidence that these approaches could work, that they were worthwhile, and that I could implement them successfully. Even 20+ years down the road, those experiences are still affecting how I teach. Last year, when I was teaching a brand new course for the first time, I called up my past instructor (who had taught the same course to me as an undergrad) and talked with him in detail about the “mastery exam” setup that he’d used. I implemented it myself, a whole new grading experience for me!
Reflecting on these experiences has made me wonder whether the same thing is happening with my own students. I teach a lot of students who become teachers, either in K-12 or postsecondary institutions, and I can see this same cycle repeating. I’ve had long and detailed conversations with past students about my instructional choices and how I got there, and some of them are using similar approaches in their own classes.
Perhaps it’s not surprising that teachers sometimes teach how they were taught – in fact, that phrase is almost a cliché. But in my case, it’s true in a positive way: My teaching is better for students because of how I was taught. And I can see the same happening with my students.
Let’s look at some research
This post was inspired by a recent paper about exactly this topic: How students’ past experiences affect their future teaching choices. Let’s take a look:
Kraft, A. R., Atieh, E. L., Shi, L., & Stains, M. (2024). Prior experiences as students and instructors play a critical role in instructors’ decision to adopt evidence-based instructional practices. International Journal of STEM Education, 11(1), 18. https://doi.org/10.1186/s40594-024-00478-3
The authors are faculty in chemistry departments, including some well-known researchers in the chemistry education world. While the paper doesn’t directly discuss alternative grading, I think there is a lot we can learn from it, in particular an often-overlooked benefit of alternative grading.
This article is centered on a graduate-level pedagogy course for students in STEM disciplines (including some graduate TAs and other future teachers). The students in this class had little or no previous teaching experience, but were interested enough that they took an elective class on pedagogy.
As part of the class, students were exposed to a variety of evidence-based instructional practices (EBIPs), including Peer Instruction and the 5E model. Some had experienced these EBIPs in previous classes, and each EBIP was used in class during a lesson, which ensured that each student experienced the EBIP from the student perspective. Later, they practiced with it as instructors: They used Peer Instruction in a practice lesson, and then gave another practice lesson where they were free to choose any EBIPs they felt appropriate.
The paper analyzes these students’ motivations and decisions about when and why they chose to use these instructional practices in their practice lessons and in other teaching. The analysis is based on many things, including teaching philosophy statements, interviews, and observations of their lessons.
The main takeaway is that a significant factor that influenced whether students adopted a specific EBIP was their previous experiences with it, especially as a student. As the authors write, “Instructors are not blank slates”. More specifically, they identify a connection between the students’ decisions to adopt an instructional practice and the “emotional valence (i.e. positive or negative feelings)” from their prior encounters with it. Students who had a positive prior experience with an EBIP were much more likely to use it in their own lessons — and moreover, they were more likely to use such an EBIP even when “they generally lacked confidence in implementing these practices”.
Importantly, the connection works both positively and negatively:
Many have asserted that instructors “teach how they were taught.” Our findings lend partial support to this statement, yet notably we found that this was not always the case as several of our participants used their experiences as students to do just the opposite.
Some of the interview subjects avoided certain EBIPs because of prior negative experiences. But in addition, “our results showed how this prior experience can in fact positively align with calls for educational reform.”
The authors also found that the students’ teaching values – the underlying philosophy that they based their teaching decisions on – were heavily influenced by past experiences:
Often, the teaching values that future instructors described were shaped by their experiences as learners themselves. [...] Participants appeared to reflect on their prior learning experiences to evaluate what was productive or inhibitive to their learning.
All of this was somewhat surprising to the authors. They had expected to find that the students would have a variety of reasons for choosing to use (or not use) EBIPs. But instead, students were quite consistent in basing their teaching decisions on their prior experiences with exactly those practices.
Going beyond our classrooms
Perhaps these results don’t strike you as surprising (they certainly aligned with my experience). Nonetheless, it’s nice to have some data to confirm what I already felt.
This article highlights something from my personal experience that I think we don’t talk about enough: When we teach with innovative practices – including alternative grading – we are providing experiences for future teachers to base their future teaching decisions on. The authors summarize this nicely:
…this study illustrates the long-term impact that the current implementation of student-centered teaching practices in STEM courses can have in a decade and beyond. Some of the current students in these courses will become instructors within 10–15 years. This study along with other studies in the literature indicate that these experiences have a strong potential to drive these instructors’ decision to implement identical or similar instructional practices in their courses and to promote teaching values aligned with student-centered practices.
That’s not a small thing. As my own experience illustrated, a student who encounters alternative grading learns, if nothing else, that there are other options besides traditional grading. A positive experience with alternative grading makes it more likely that our students will use it themselves in the future, if they have the chance.
By using alternative grading, we move the needle: We give students confidence and encouragement to use alternative grading in their own future classrooms. That helps spread alternative grading for years in the future and gets us closer to making “traditional” grading that much less standard.
But wait, there’s more!
This article pointed me to a number of other interesting articles on chemistry education co-authored by Marilyne Stains and others. Alongside math, chemistry seems to be a particular hotbed of STEM-based alternative grading. I hope to dig into these papers in more detail later. For now, here are some quick citations and summaries:
Wang, Y., Machost, H., Yik, B. J., & Stains, M. (2025). Why chemistry instructors are shifting to specifications grading: Perceived benefits and challenges. Chemistry Education Research and Practice, Advance Article. https://doi.org/10.1039/D5RP00035A
A key highlight, keeping in mind that Stains et al. are especially interested in how and why instructors choose educational innovations:
Our results indicate that instructors adopted specifications grading as a means to address their dissatisfaction with traditional grading. The commonly cited relative advantages of specifications grading include a perception that specifications grading increases student learning gains and provides greater flexibility for students. These findings provide insights into the dissemination strategy of innovation, highlighting a need for direct alignment between perceived advantages of pedagogical innovations to instructors’ dissatisfaction and instructors’ expressed real-world needs and aspirations for their classroom.
This is especially interesting since it is entirely about an instructor’s motivation in terms of dissatisfaction. Again, perhaps this isn’t surprising to those of us already using alternative grading. But this is an important source of motivation to remember as we talk with colleagues and others. It also shows an interesting contrast: Presumably many of those who are dissatisfied with traditional grading had to go looking for an alternative. Those who have previous experiences with alternative grading already have a model to look towards, as described in the first paper.
Finally, the article makes it clear that the authors don’t consider alternative grading an “evidence-based instructional practice”, although I hope recent research trends may change that.
One last paper, this one more on the student side:
Yik, B. J., Machost, H., Streifer, A. C., Palmer, M. S., Morkowchuk, L., & Stains, M. (2024). Students’ perceptions of specifications grading: Development and evaluation of the Perceptions of Grading Schemes (PGS) instrument. Journal of Chemical Education, 101, 3723–3738. https://doi.org/10.1021/acs.jchemed.4c00698
This one looks like it has a lot of potential for future research, especially by those looking to gain insight into their own alternative grading setups:
... we developed the Perceptions of Grading Schemes (PGS) instrument, which explores students’ perceptions of the implementation of specifications grading as compared to a traditional grading scheme experienced in other STEM courses.
These perceptions are based on Linda Nilson’s list of benefits from her book Specifications Grading. A bit ominously, “Data suggest that implementations of specifications grading may not be achieving all of the hypothesized student outcomes.” A quick read-through shows that in specifications-graded classes, students definitely reported lower anxiety (probably not a surprise) and found expectations to be much clearer (also not too surprising), but there was little effect on motivation to learn, perception of clear feedback, or perception that their work reflected class learning outcomes. This is contrary to a lot of anecdotal evidence, so there’s clearly more to study here.
It’s an exciting time to be working with alternative grading. Many of us are motivated to adopt alternative grading by the prospect of improving our own classes for our students. It’s worth remembering that we may also be improving education for students many years in the future.
[1] At the turn of the century, oh dear.
[2] The instructor didn’t give it a name, but he did call for the “abolition of grading.”