For those of us who appreciate quantitative evidence that students "learn more" with alternative grading methods, what do you all think is the best evidence that could reasonably be collected?
For myself, I "know" what I'm doing now is better for students than what I used to do, but since my tests and lectures and study materials have all changed in parallel, and since my learning objectives are somewhat different now, it's hard to make a strong evidence-based case (and I don't feel like teaching half of the class with methods that seem suboptimal just for the sake of getting comparison data -- though maybe some of that is needed or would be useful?).
What do you mean by "best"? 😁 Seriously though, the answer to that question depends on what you want to know about, and to some degree the discipline you're in. In physics, for example, there's the Force Concept Inventory, which has been used for 25 years now as a means of quantitatively measuring conceptual knowledge in physics. There are similar concept inventories in other disciplines, of varying degrees of quality and statistical validation. But maybe this isn't quite what you had in mind, and instead you want to see if students' perceptions of and attitudes about your subject area have changed -- in which case there are instruments for this as well.
And are quantitative data really better, from an explanatory standpoint, than qualitative in situations like this? Quantitative data can be a bit like grades themselves, carrying an illusion of scientific precision just because they're numbers, when in reality they tell you less than they appear to.
Fair questions. I suppose I'm thinking most about the scenario of trying to convince a slightly old-school colleague to try something and having the colleague demand evidence of improved learning (not just perceptions of a better environment, better attitudes, etc.). I think you're saying that validated concept inventories are the best tool we have for that sort of situation with that sort of need.
I agree, concept inventories are a great place to start. Also, grades are *not* a great place to start. Not because of any "grade inflation" bogeymen so much, but rather because there's no real reason to think that final grades from two completely different systems should be comparable at all. They may go up or down, and learning may go up or down, but those are not necessarily linked.
Also, asking students is a legitimate way to approach this. Students can report accurately about their study habits, the incentives they feel, and their learning experience.
Finally, separate from learning, we have pretty good data that alternative grading decreases test anxiety and reduces cheating. Even if those were the only changes (and learning stayed the same), that would already be a big win.
Yep -- definitely not intending to dismiss students' experiences of their learning, just noting that, to a certain subset of instructors, such evidence might be less compelling than "scores on test T rose by X%"...
Oh yes, for sure. Maybe the best data I know of on this is the 4th (and final) one I mention here: https://gradingforgrowth.com/p/four-key-research-results
The key is that it does compare apples to apples -- that is, it compares the grades of students within the *same* class, but who had taken a *prerequisite* class under different grading systems (some traditional, some alternative). The students who had completed the alt-grading prerequisite averaged 0.25-0.33 grade points higher in the traditionally graded follow-up class.
The book club is a great idea! I wish there were a way for those of us who have already purchased the book to join. I'm currently leading a faculty reading group using Grading for Growth, and we purchased 6 copies of the book for faculty.
Hi Deborah - please email me: clarkdav (at) gvsu (dot) edu, and I'll see if we can figure something out.
People who have purchased the book can join the Perusall group -- I think there's a way to do this, at least -- but unfortunately this is the last week for it. You could still interact, though. I think it would also be possible for you to make your own Perusall group, since the Grading For Growth book is available in the Perusall database. Don't quote me on that, though!
I'm interested in a conversation around alternative grading and wellness - for both faculty and students. We often hear why it's good for students ... but why is it good for faculty?
+1 to all of what others said. Pragmatically, for me, specs grading is just easier. Since I am assessing small, atomic-level skills (my learning objectives), each one of those is very simple and quick to grade. The tradeoff is that there are more of them to grade. But if each individual unit is small and simple, it's easier for me to fit grading into a daily routine. Like, if I have 15 minutes, I can grade one learning objective and it's done, as opposed to when I gave hour-long tests and couldn't split the job up into small, simple subtasks as easily.
In addition to the good answers from David and jwr, I'd just add that (as a biology instructor) I used to have lots of angst around writing tests. I was always aiming for an elusive sweet spot where the test questions were related enough to the in-class practice questions to be fair, but not exactly the same (so that students couldn't just repeat previous answers verbatim). Switching to something like standards-based grading has relieved most of this angst, because I'm now much more confident that I have communicated the standards clearly and am sticking to them appropriately.
Nice question - I can tell you some benefits that I've noticed in my own teaching, and things I've heard from others:
* Office hours become SO MUCH better. Students know that your feedback matters and that they have an opportunity to improve, so they tend to use office hours more often, ask better questions, and focus on learning over trying to get back a point or two. For me, office hours have become genuinely enjoyable.
* In general, if implemented well, alternative grading can help improve the student-teacher relationship across the board. It's less antagonistic and more collaborative.
* Likewise, giving helpful feedback *feels* much more useful than worrying about assigning just the right number of points. It lets me focus on what I really care about, not on the mechanics of points and partial credit.
* Setting clear standards forces you to think carefully about what really matters. This can lead to some big changes in what you actually assess. The first time I took an old exam and tried to align it to my (new) list of standards, I was mortified at how out of alignment it was. Too much of this, none of that... my assessments are much more sensible, balanced, and focused now.
* Grades just plain make more sense. They don't feel like a numbers game; they have actual meaning. That's a huge benefit for students, but also has a psychological benefit for *me* when setting final grades.
Those are just the first that come to my mind. I'm curious to hear what others say too!
Really good question. I think it depends a lot on how you do it and the context that you're doing it in. Speaking as a community college writing professor who primarily teaches in person, I've found ways of ungrading that have made my work a lot more sustainable.
There are two main things that I would point to. One has to do with the way that I teach the writing process and specifically the way that I handle work in progress. Basically, I use what I call a "useful progress" standard when looking at in-process work, which means that if a student is making useful progress they're doing what they need to be doing and should get credit for doing it. This lets me focus on responding to where students are and helping them think about possible next steps. (In other words, formative assessment.)
The other has to do with shifting to what I think of as a "studio" approach. This means that a significant part of our class time is set up somewhat like a studio art class. This gives students time to work on their projects, and it gives me time to check in with them individually or in small groups. With twenty students in a class and a couple of mini-lectures a week, I find that I can usually spend a few minutes working one-on-one with each student every week. This enables me to do a significant amount of alternative assessment. (For example, employing the "useful progress" standard, I can give students credit for being able to talk me through what they've been working on, or for working with me to troubleshoot a problem they've been having, even if their learning isn't fully evident in their written work.) It also enables me to provide a significant amount of ongoing in-person feedback. Compared with written comments on drafts, I have found this in-person feedback to be both far more helpful for students and far more efficient for me. Honestly, it's been key to keeping my workload sustainable.
But it's not just that these forms of ungrading make my work more efficient (though given the realities of a 5+ course load, that's an important consideration). It's that they make my teaching better (i.e., more effective in terms of supporting student learning) and my working life much happier.
I love this idea of "useful progress." I also teach writing, and I find that every student is in a different place, with different skill sets and strengths/weaknesses to consider. Given the wide spectrum of abilities and areas of growth students exhibit in writing, how do you articulate exactly what "useful progress" means or looks like, and how do you track it across a semester?
That's a great question! In response, I want to start by underlining part of what you said: "I find that every student is in a different place, with different skill sets and strengths/weaknesses to consider."
This is so true, and with this in mind, I think it's important to recognize that "useful progress" may look different for different writers working on different projects at different points in time. So, for me, the challenge is really to understand what progress looks like *for particular students in particular situations*.
A significant part of this understanding emerges from conversations with students (though of course I do have an overall strategy for the work that we're doing over the course of the term). When I do check-ins with students, we're essentially collaborating to understand where the student is making progress and identify ways that they can continue to build on that.
In terms of how I track progress over the course of the semester, I guess I'd say that there are two basic things that I do. The first is a pretty simple checklist. Except for the final project, just about everything we do is what I consider work in progress, which means that students earn credit for having usefully engaged with the task.
The other thing that I try to do is compose a story about my students' work and learning. The on-paper form of this story might look like a scribbly mess of a mind map, and it's not necessarily a major factor in my grading, but it helps me form a picture of the students and their work over the course of the semester.