Discussion about this post

Rebecca Swanson

Thanks so much for this comment! I struggle with finding the right language. When I was first learning about alternative grading, a colleague at another institution used the term "Mastery-Based Testing." I initially used it too, but, in addition to other problems with the term (see the link above in the article), "mastery" felt too strong. I didn't really expect students to "master" something in a single semester. "Proficiency" felt less strong, but maybe still a higher expectation than I should have, or can truly verify.

I should mention that it isn't only on the exams that students work on the outcomes - all of their homework problems are labeled by outcome, and some of their engagement work is labeled by outcome. So they aren't demonstrating their ability to solve problems a single time, but rather on at least a few occasions during the semester.

A colleague of mine who has adopted the system has been calling it "Outcomes-Based Testing," which maybe is better, as she isn't claiming student mastery or proficiency, but rather the student's ability to complete a problem based on an outcome.

Thanks for the conversation!

Still lighting learning fires

I'm always interested to see the way colleagues use terms, and obviously we use them in ways that seem accurate and appropriate for our purposes. In this case I'm thinking about the term "proficiency." Your work here brought me back to an analogy I often use when thinking about the difference between grading products and assessing for proficiency. Learning to drive makes the contrast easy to see.

The road test is a performance on a route chosen by the examiner. If you make a mistake, you can take it again—just like in your class model. Passing the written exam and the road test earns you a driver’s license, which is basically the “passing grade.”

But all of us have met drivers out in the wild who clearly passed the test and, at the same time, don’t exactly radiate driving proficiency. That’s because the road test measures how you perform on that specific sequence of tasks on that specific day. It doesn’t capture how you actually drive across real contexts, real conditions, and real time.

True proficiency shows up in a broader, more varied record of behavior—how someone merges, anticipates, adapts, or manages unpredictable situations. Some of the new in-car AI tools that track long-term patterns probably give a better picture of driving proficiency than the official test.

That distinction feels relevant in assessment, too. When evidence is limited to products the instructor defines—exams, problem sets, a specific format—we’re really assessing performance on those products. Useful, important, but limited. When learners can show what they can do in multiple ways—and sometimes have the option to propose how to demonstrate it—we move closer to the kind of capability we might honestly call proficiency.

Maybe it’s time to rethink not just classroom assessments but driver’s licenses, too. A little proficiency assessment on the Kennedy Expressway here in Chicago at rush hour might produce some very different results.

