Back in Fall 2021, I wrote about the first time I used “ungrading” in an upper-level Euclidean Geometry class for future teachers (midterm check-in, final review). After a long gap, I taught the class again this past semester. I significantly updated that “ungrading” system, retooling it to focus not only on learning but also on *growth*.

Over the next two weeks, I’ll describe and reflect on that Euclidean Geometry class. This week, I’ll focus on the implementation details and what this approach to assessment looked like on the ground. Next week, I’ll reflect on the successes and failures of the updated assessment system.

This post focuses more on pedagogy than most other posts on this grading-focused blog. That’s partly because pedagogy and assessment are closely linked. But also, this was the first class where I feel that I *almost* succeeded in eliminating student focus on grades. So, much of this post will be about what replaced that focus on grades – specifically, what students actually *did*.

Finally, a note about terminology: As you’ve heard from Robert and me many times, the term “ungrading” has many conflicting definitions, and that’s getting worse. I’ve decided to stop using the term entirely and instead just describe what I *do*. So in this post, I’ll be describing an update to the “ungrading” system that I used back in Fall of 2021, but this is also the last time I’ll be calling it “ungrading”.

# What is this class?

Euclidean Geometry is a junior-level class required for future secondary (grades 6-12) math teachers, who will likely teach geometry in their own classrooms. The course focuses on why the theorems and facts that they learned (and will teach) in high school geometry are true. This can be tricky, because students have already seen these same facts in middle and high school, and so part of my goal is to convince students that there’s a lot more to geometry than meets the eye. That said, these students are *motivated*. The class is directly relevant to their future careers, and they see that connection.

This semester, my class included 14 future teachers who had already taken many math and education-focused classes. Because of that past experience, I could count on my students being willing to work with each other from the very beginning, and being comfortable with active hands-on learning. This was fantastic: I could walk in on the first day of class, say “here’s a problem – group in 3’s and solve it on a whiteboard!”, and *they just did it*. I have my colleagues – especially those who focus on math education – to thank.

# How class works

I structured my Euclidean Geometry class using guided inquiry. Students attempted genuine new problems – “constructions” (steps to create geometric objects using compass and straightedge) and proofs (written logical arguments that justify a geometric idea) – before class. They submitted drafts of their work through our LMS, where they also volunteered to present some or all of their work in class.

I selected presenters from the volunteers, based entirely on whether they had presented recently or not, without worrying about correctness. This was deliberate, and I was up front with my reasoning: Students knew that it was up to them to convince their classmates, but also that the rest of us (myself included) were rooting for them.

Most of class time involved volunteers presenting their work, including discussions and questions for the presenter, until everyone was comfortable with their understanding. During presentations, I sat in the back of the room, giving focus to the students and their work. Otherwise my main job was to carefully ask questions, point out interesting results, scaffold student work so that they could discover key ideas themselves, and generally orchestrate opportunities for students to *do* mathematics.

I worked hard to build a comfortable, supportive environment where productive failure was valued. One of my goals, every time I teach this class, is to develop a shared value of deep understanding. I want us to value not just individual understanding, but collective understanding as well. So, it’s ok to admit if you don’t understand something, to be wrong in front of friends, and to keep asking questions until you fully understand. This aligns nicely with the focus that alternative grading puts on productive failure and feedback loops.

I also talked with students about how this guided inquiry model is relevant for their future careers as teachers. At the most basic level, in this class students learn the bigger picture behind geometric content that they’ll teach in the future. At another level, they practice giving clear explanations and presentations, thinking about their audience, and anticipating questions – as teachers often need to do. At an even higher level – again, I explicitly discuss this with students – the course structure itself illustrates a better way to run and assess a class. In my experience, if someone has a successful experience with some kind of “alternative” pedagogy or assessment method while they are a student, they’re more likely to feel confident using it when they’re a teacher. Students noticed this, and often asked me questions about it.

I recorded who submitted pre-class attempts at problems, and I took detailed notes about presentations and shared them with the presenters afterwards. But nothing I’ve described so far was formally *assessed*. Students completed the work, even without grades, because it mattered: Our entire class was based around this feedback loop of making an attempt, presenting it, and working until we fully understood. Anybody who didn’t engage with those initial steps would have nothing to say or do during class.

I used my notes about prep work and presentations in two ways: First, as a discussion point during check-in meetings, especially if a student *wasn’t* regularly submitting work or volunteering. Second, to help me see a student’s progress towards their *direction for growth*, a key part of the final grade. See below for details of each of these.

# Actual assessments

Assessments were almost entirely focused on “homework”. Every two weeks, I assigned some recent constructions or proofs to be written up formally. These were the same items that students attempted before and presented during class, so by this point they’d had a lot of time to make sense of them. I gave detailed feedback on this written work, both in terms of logical correctness and communication, but I put no grades on anything. My purpose was to help students focus on what I really care about – understanding difficult mathematics and figuring out how to communicate it – without the weird incentives of grades.

Between homeworks we had “revision weeks” in which students could resubmit revised work for additional feedback. Interleaving homework and revision weeks was an important practical change that I made this semester: It allowed us to focus more deeply on one thing at a time. Last time I taught the class, we had homework *and* the option to revise every week, which was simply too much work for students and for me.

The feedback I left on homework made it really clear if there were important mathematical or communication issues. For example:

> Unfortunately, the proof has a major problem: the point where the bisector intersects the opposite side might not be the midpoint. (Do you see why? Try it with Geogebra!) You’ll need to find a different way to show that these two triangles are congruent. Take a look at what else is given.

I didn’t need to leave a grade on each assignment to communicate correctness – feedback was a much better way to communicate this! Of course, it’s *always* possible to give more feedback, so I tried to limit feedback to key issues and explicitly asked students to request feedback on more minor mathematical or writing issues.

To emphasize that feedback *really* wasn’t a grade, I *always* left some sort of feedback, even on really excellent work. In that case, I often pushed students in new directions, especially ones that could refine their sense of communication. I enjoyed being deeply engaged with a student’s work and thinking carefully about its strengths and weaknesses, rather than always thinking about what the grade should be.

# Portfolios and final grades

As I mentioned above, I put no grades on any assignments, only feedback. But I couldn’t avoid grades forever: Like most classes, I had to assign final grades at the end. So how did that happen?

Before the semester started, I put together **narrative criteria for each final grade** and shared these in the syllabus. Here’s an outline:

- To earn a C, be a good class citizen (e.g. attend as much as possible, complete daily prep, work collaboratively, etc.) and show deep understanding of *many* key geometric ideas (from a list provided).
- To earn a B, actively share your ideas with the class, through presenting or a separate written “class journal.” In addition, do everything for a C, including *more* geometric ideas.
- To earn an A, show significant growth in a new direction, either presenting or writing. In addition, do everything for a C and B, including *almost all* of the geometric ideas.

This is just a summary; the syllabus includes more details (including examples of ways to meet each criterion). I also used this description for D and F grades:

> There is no description for a D or F, because these grades represent a fundamental breakdown of expectations. A “D” represents a meaningful but unsuccessful attempt at earning a C or above. An “F” represents such a severe lack of engagement, effort, or understanding that there is no evidence of meaningful progress.

The most interesting part of the grade criteria, from my perspective, was the “growth in a new direction” required for an A. This was brand new to me, and I wrote about my hopes and intentions for it early in the semester. I was interested not just in what students accomplished in terms of geometric content, but also in how they *grew* in ways relevant to their future careers as teachers. I’ll have much more to say about the “direction for growth” next time, but overall: It worked. Students created individualized goals to work on, and they very much put their hearts into those goals. The goals helped *me* as well, both in giving better feedback, and in adjusting my own attitude when students were struggling with some part of class. But at the same time, this “direction for growth”, and the need to assign a final letter grade using it, highlighted some of my concerns about how this type of grading works – or indeed how *any* kind of grading works.

Second, **students completed regular check-in meetings and reflections about their progress towards meeting these criteria**. I met with most students at least twice during the semester to talk about their grade progress. Before each meeting, as part of a regular homework assignment, students completed a structured written reflection about their progress and identified specific examples (such as homework problems or presentations) to illustrate that progress. At the meeting, we talked about their responses and about what grade they were “aiming for”. If I disagreed with their self-assessment, or saw areas that they needed to work on, I said so and we made a plan for how to work on it.

I also asked students to complete written mini-reflections every few weeks, in between the “big” reflections. This helped keep them thinking about their goals, and gave me a chance to encourage minor adjustments or give reminders.

At the end of the semester, **students assembled a final portfolio of work along with a reflection in which they suggested a final grade.** I gave detailed instructions for the portfolio, following a similar structure to the check-in meetings and reflections. You can see the full portfolio instructions here. The portfolio included:

- Reflections on how the student met each of the criteria for their desired grade. These reflections referred to specific artifacts that were included in the portfolio.
- An overall final reflection, in which the student could also propose adjustments to their final grade (e.g. B+ instead of B), with reasoning.
- A collection of artifacts that supported their arguments. These artifacts could include written homework, but also written representations of presentations, daily prep, or classwork.

I also asked students to respond to a few other prompts. By far my favorite of these was: “Finish the sentence with another student’s name: I especially appreciated _____, because…” The responses gave me a lot of insight into how class worked from a student perspective, and students impressed me with their responses. They described how other students supported their learning, showed kindness or empathy, and generally were good *people* – grades simply didn’t factor into it.

As with any assignment, the portfolio involved a feedback loop. One of the final homework assignments included a draft of a portfolio artifact and the associated reflection, which gave me a chance to give feedback on the details of the portfolio. The last day of class was a portfolio workday, which gave students one last chance for feedback and questions.

The portfolio was due at the final exam time – it *was* the final exam – and I read through them in detail. If a student made their case thoroughly, they earned the grade they claimed. If I disagreed, I reserved the right to adjust the grade up or down based on the evidence they submitted. In the end, I didn’t make any adjustments, although I’ll discuss my thoughts on that next time.

# Next time: growth and reflection

In this post, I’ve focused on the implementation details of this assessment system. You might have a lot of questions – how did it work? What’s that thing about a “direction for growth”? Did students do a good job of self-assessing? Did I disagree with any final grades? All of this will be addressed in next week’s post, which will be *much* more philosophical and reflective than this one. See you then!

When talking with students, I never use the name “ungrading” for my approach – actually, I don’t use *any* name. The same is true for pedagogical structures (“guided inquiry” or “flipping”). I find that giving these items a name is unnecessary and can mislead students who might have encountered similar but not identical systems in other classes, which is a pretty common occurrence at GVSU.

OK, I’ll admit that I did filter volunteers *very slightly* – if I noticed that a solution was so far off the mark that it probably wouldn’t lead to productive discussion (e.g. the only response would be “we have to try something completely different”), then I didn’t choose it. But solutions with errors big or small that could lead to productive discussion were fair game. Students often left comments on their prep assignments along the lines of “I don’t think I’m confident enough to present” or “I think there’s a flaw here, but I’d like to share it and see what ideas others have.”

I borrowed this phrasing from somebody, but I can’t remember who. If it was you, please let me know and I’ll give you credit!