What do you mean by "best"? 😁 Seriously though, the answer to that question depends on what you want to know about, and to some degree on the discipline you're in. In physics, for example, there's the Force Concept Inventory, which has been used for 25 years now as a means of quantitatively measuring conceptual knowledge in physics. There are similar concept inventories in other disciplines, of varying degrees of quality and statistical validation. But maybe this isn't quite what you had in mind, and instead you want to see if students' perceptions of and attitudes toward your subject area have changed -- in which case there are instruments for that as well.
And are quantitative data really better, from an explanatory standpoint, than qualitative data in situations like this? Quantitative data can be a bit like grades themselves: they carry the illusion of scientific precision just because they're numbers, but in reality they tell you less than they appear to.
Fair questions. I suppose I'm thinking most about the scenario of trying to convince a slightly old-school colleague to try something and having the colleague demand evidence of improved learning (not just perceptions of a better environment, better attitudes, etc.). I think you're saying that validated concept inventories are the best tool we have for that sort of situation with that sort of need.
I agree, concept inventories are a great place to start. Also, grades are *not* a great place to start. Not so much because of any "grade inflation" bogeyman, but because there's no real reason to think that final grades from two completely different systems should be comparable at all. They may go up or down, and learning may go up or down, but those are not necessarily linked.
Also, asking students is a legitimate way to approach this. Students can report accurately about their study habits, the incentives they feel, and their learning experience.
Finally, separate from learning, we have pretty good data that alternative grading decreases test anxiety and reduces cheating. Even if those were the only changes (and learning stayed the same), that would already be a big win.
Yep -- definitely not intending to dismiss students' experiences of their learning, just noting that, to a certain subset of instructors, such evidence might be less compelling than "scores on test T rose by X%"...
Oh yes, for sure. Maybe the best data I know of on this is the 4th (and final) one I mention here: https://gradingforgrowth.com/p/four-key-research-results
The key is that it does compare apples to apples -- that is, it compares the grades of students within the *same* class, but who had taken a *prerequisite* class under different grading systems (some traditional, some alternative). The students who had completed the alt-grading prerequisite averaged 0.25-0.33 grade points higher in the traditionally graded follow-up class.