# Specifications grading and "help"

### Can students in a specs grading setting do work without help? Do we care?

Recently, a colleague emailed to let me know he was thinking about using specifications grading in his courses this fall, but had some questions. The ensuing back-and-forth forced me to think about some wide-ranging issues about grading that David and I will be writing about here in the future. My colleague had read a series of posts I published at my website in January, especially this one, about the design of my Modern Algebra course, and asked:

> How do you know what the student can or cannot do on his/her own (without non-trivial amounts of help)? In general, I like the idea of the proof portfolio; but, do you ever get students who can’t produce a correct, coherent argument without help?

**Background**: Modern Algebra, an upper-division course for mathematics majors, uses a specs grading approach that centers heavily on students assembling a portfolio of their work on proof-based problems in abstract algebra (the “proof portfolio”). As with any specs grading implementation, if a student turned in work that wasn’t up to specs, they got feedback and the opportunity to revise and resubmit, iterated until the standard was met.

I think my colleague’s questions are important and good. Here’s a summary of my reasoning through them.

*How do I know what the student can or cannot do without nontrivial amounts of help?* Like a true mathematician, I’ll begin by questioning the premise and the terms of the question itself. First, what do we mean by “help”, and how much of it is a “nontrivial amount”?

There are some obvious cases where the type and amount of help are not academically acceptable. For example, a student who plagiarizes an essay or proof and turns it in as their own work is “getting help” of the wrong sort and in far too great a quantity. Likewise, a student in an IBL class who cannot put their solution to a problem on the board without a friend coaching them through every step is getting too much help.

But once we step away from the obvious, things get very murky very quickly. Suppose that student in the IBL class *does* put their solution on the board and fields reasonable questions about it, but only because they practiced it the night before with a friend. Is *that* “too much help”? Or, suppose the other student didn’t plagiarize the essay or proof but instead turned in work that was their own, and it didn’t meet quality standards, but then they got feedback and revised and resubmitted until it did. Is *that* “too much help”?

If I look at the end product — the solution on the board, the proof or essay — how do I know how much of that product is the student’s thinking and not influenced heavily by another? Where does the student stop, and their sources of “help” begin?

**In one sense, once you get past the obvious academic-dishonesty situations, this question is unanswerable.** When I look at a student’s work, if it’s not violating academic honesty rules, it’s simply not possible to disassociate the student’s thinking from that of the others with whom the student has interacted to help their thinking (which is one way to think about “getting help”). A social constructivist view of learning would agree: through that lens, there is simply no such thing as “working without help”, because all knowledge is built through interaction with others (another way to think about “getting help”). So I can evaluate what the student *does* and whether it meets specifications; but I can’t determine how much of that comes only from the student. In fact, it’s possible that *none* of it truly is “from the student” rather than significantly rooted in other people and resources.

But I could be wrong. Maybe it’s possible to set up a situation where the amount and types of help available to students are controlled, which would give clearer insight into what the student can or cannot do on their own. In fact, I know this is possible, because we’ve been doing it for centuries in the form of **timed assessments.**

The whole point of a timed assessment — one-and-done, without the possibility of revision/resubmission later, since that would be a form of “getting help” — is to limit access to time and resources, to see what students “really know”. But let’s consider the consequences of that approach:

- Timed assessments without the possibility of revision ramp up anxiety levels, triggering a fight-or-flight response that adversely affects cognition — effectively corrupting the data about learning collected by the assessment. (David wrote about some related research on this last week.)
- Timed assessments tend to be based on memory and recall, which can be different from “knowledge”. When I give a quiz over the Chain Rule in calculus, what I’m *really* giving is at least as much a quiz over students’ ability to *remember* the Chain Rule as it is over their ability to apply it. If you believe this is splitting hairs over semantics, try giving two versions of such a quiz, one with notes allowed and the other without.
- Timed assessments are notoriously brittle with respect to logistical issues, something we all experienced during the COVID-19 pandemic as students had to enter quarantine and deal with their lives outside class.

So we’re kidding ourselves if we think that one-and-done timed testing controls for the use of outside help without letting in all sorts of other confounding variables. You *might* address the issue of students not being able to do work without help, but doing so introduces more problems than it mitigates, and it’s questionable whether the data you collect really measure what you think they measure. Even if it looks like independent work, it might just be help on a time delay.

That gets me to another angle on this question: *Is this idea of students being able to do work without significant amounts of help something we even want in the first place?*

The word “independence” plays a big role here. Maybe it’s not so much that we want students to build knowledge of the subject without any interfacing with another human being at all — that’s probably not possible. All we want, maybe, is for students to be able to demonstrate their fluency without a lot of explicit coaching. *Independently*, in other words.

In my conversation with my colleague, one of the things I noticed was that in my department’s official syllabus of record for Modern Algebra, there is no course outcome regarding the idea of “independence”. That doesn’t mean we find this unimportant, just that it’s not something we instructors are charged with assessing. So, I don’t. My course objectives and the module-level objectives that flow from them address goals about writing, problem-solving, thinking with abstraction, and so on. But I make no claim that students should do this “independently” (apart from not breaking the academic honesty rules) and I don’t have any mechanism for demonstrating independence. Until this email discussion, I’d honestly never even thought about it.

People learn by engaging with feedback loops. This is a fundamental truth about humanity and the core idea of mastery-based grading. So while there is value in “independent work”, there is *more* value in students demonstrating the ability to take help that is offered and use it to grow. If a student can demonstrate independent thinking, but cannot grow through feedback — through *help* — then I do not consider that a success. But if the student *can* show evidence of growth through feedback loops, that *is* a success, and whether or not the student struggles to enter the loop “independently” is simply not that important.

**But**: It’s certainly not wrong to value independence, however you may operationalize it, and make it something you *do* assess. If you really wanted to measure whether students can do work independently, I’d suggest making it into a **specs grading bundle**. Include in this “Independence Bundle” whatever evidence you feel you need to determine that students can do work without lots of explicit coaching. Perhaps to complete the bundle, students have to pass four oral exams in your office during the semester; or a certain number of learning targets have to be mastered using timed assessments (though see above); or whatever. Take my colleague’s question and turn it around: *What evidence can students show you that they have attained independent mastery?* Then ask for it, and evaluate it according to specifications.

Likewise, am I saying that we shouldn’t use timed assessments? No. I’ll be using them myself this fall. But if you do, then those timed assessments should be part of a well-balanced diet of assessments; the timed assessments might target specific learning outcomes on the low end of Bloom’s Taxonomy, supplemented with other, untimed assessments that target the top end. Students who don’t meet the specifications on their timed assessments could have the option to take a similar problem on a later quiz, or do an oral exam or make a video of themselves instead. And every reattempt, timed or otherwise, should require students to do metacognitive work, to show that they have thought about their previous work and the feedback it received.

And ultimately, the instructor is the final arbiter of how much help is “too much”. How much help are you willing to give to a student, or allow a student to incorporate, and still be satisfied that their work demonstrates individual mastery of the topic? That’s the limit: give all the help in the world to that student up to that point, and no further.

My colleague asked a second question: *Do you ever get students who can’t produce a correct, coherent argument without help?* My answer might have seemed flip:

*All the time, and I’m one of them.* Ask my students, and they’ll gleefully tell you that I never do even a basic arithmetic calculation during class unless I check it on Wolfram|Alpha at the same time. I’ve lived long enough to have a deep mistrust of my supposed independence. But I *do* trust my ability to learn things. In the end, the whole point of specifications grading and other methods like it is to build *that* skill, of seeking out and taking help and making it truly *helpful* for yourself.

*Click here to receive Grading for Growth in your inbox, every Monday.*

You make good points that Modern Algebra students shouldn’t necessarily be able to write proofs with no help whatsoever. Maybe this is a better way to rephrase the question: “How do we know that the student could make a useful mathematical contribution to a team working on modern algebra problems?” That wouldn’t necessarily need to be a “one-and-done” timed assessment. It also wouldn’t need to carry an expectation of perfection. It would just need to provide evidence that the student has a reasonable understanding of the principles of Modern Algebra.

Interesting post, as always. You seemed to be arguing hard against the idea of timed assessments, but then you admitted that you still use them, so I guess you were just making the strongest argument possible for the sake of airing the key issues. I do want to push back on some of those arguments, though. Yes, timed assessments cause anxiety, but so does almost any kind of assessment. I think your argument would have to be that the increase in student anxiety is not worth the additional information gained by the instructor, or something like that. Regarding the assumption that timed assessments test low-level cognition, that can easily be remedied by letting students use notes (as you said) and/or by giving students clear advance indications of which facts they’ll need to know, so that the focus of the test becomes one of applying those facts well. The latter can be done via various study-guide options such as Ben Wiggins’ Public Exams or my own Test Question Templates (TQTs). Both of these originated in biology but should also work for other STEM subjects, perhaps even math. (Their creators would like to think so, anyway!)