As Robert and I write this blog, we are also writing a book about alternative assessment systems titled “Grading for Growth.” The core of the book will be case studies that feature instructors and how they use various alternative assessment systems in their classes.
Today, we bring you a preview of our book’s first case study. It’s organized around the four pillars of alternative assessments and shows how one instructor uses Standards-Based Grading (SBG). You may want to review the links above to refresh these ideas.
We would love to hear your feedback on this case study. Do you find it helpful? Is there something more you’d like to know? Are there unnecessary extras we added in? Let us know in the comments!
Joshua Bowman has been an assistant professor at Pepperdine University, a small liberal arts college in Malibu, California, for 6 years. During that time, Bowman has refined and revised a Standards-Based Grading system that he has found to be especially flexible in extreme circumstances.
Calculus 1 is a common introductory course required for many STEM majors. Each of Bowman’s Calculus 1 classes has around 25 students, and students take part in group discussions, activities, and whiteboard exercises.
At first glance, the assessments look fairly traditional: the main mode of assessment is in-class exams. Once you look at the details, however, you can see how Bowman uses Standards-Based Grading (SBG) to build in opportunities for reflection and growth.
Everything — assessments as well as class and practice problems — centers on one or more tasks. That’s Bowman’s name for “standards,” and it does a good job of describing what students do: They complete tasks, consisting of one or more related problems, that demonstrate their ability to use the tools of Calculus in various ways.
The tasks are divided into three categories: Core tasks cover fundamental ideas in Calculus, including both calculations and concepts. Auxiliary tasks focus on understanding that goes beyond the basics, ranging from interpretation up through complex problems that involve multiple skills. Modeling tasks involve creating, interpreting, or analyzing mathematical models that use techniques from Calculus.
There are 37 tasks in total — a high number for an SBG class, although this number varied as Bowman refined his system; other standards-based classes typically use between 20 and 30 standards. Here are some examples, along with the short “code” that helps identify them on assessments:
Core tasks (14 total)
C.1: Find and interpret the average rate of change of a quantity
C.7: Find derivatives (including higher-order) of a polynomial function
C.14: Find particular values of the antiderivative of a given function by using integrals.
Auxiliary tasks (20 total)
A.9: Estimate a derivative by approximation using a table or chart
A.10: Determine the units and physical interpretation of a derivative in an applied context
A.16: Solve a related rates problem
Modeling tasks (3 total)
M.1: Determine how a quantity varies over time (increase, decrease, periods of constancy, points where maximum or minimum value is reached) given its rate of change
M.2: Identify the objective function and constraints in a situation involving maximization or minimization; find and interpret conditions under which the desired extreme value occurs
M.3: Find and interpret the total amount of change in a quantity over time from its instantaneous rate of change function
As in many alternatively assessed classes, there are more exams than you might expect: 5 exams throughout the semester, plus a final exam. Having many smaller exams helps to spread out the workload, reduce test anxiety, and give students multiple opportunities to show their learning. Each exam covers 8 to 11 tasks, with each task being assessed by one or two related questions.
Here’s an example of how task C.14 (which we saw above) might be assessed on an exam:
And another example, of task A.9:
Notice that the problems are always labeled clearly with the task’s name and description. There’s no mystery about what is being assessed. The expectation, set from the beginning of class, is that students must show not only final solutions, but also the work and explanations that help the grader understand their thinking.
Students don’t receive an overall grade on an exam. Instead, they earn a mark on each task separately, as well as detailed feedback. These marks indicate progress: Successful, Minor revisions needed, New attempt required, or Incomplete/Insufficient. Only “Successful” marks count in the final grade. The rest of the marks, together with feedback, indicate what the student’s next step should be.
Tasks that earn Minor revisions needed have a small arithmetic error, a miscopied value, or a similar error that is important to fix, but not central to the student’s understanding of the task. Students have 3 days to submit a “Revision Form” in which they explain the error and how to correct it, which can then turn this mark into Successful. This form has two prompts:
Here’s the mistake I made on my last assessment of this task: (Be thoughtful in your response; don’t just list the error, explain the thinking process that led to it.)
Here’s my correction of the solution, and how my understanding has improved by making this revision:
A mark of New attempt required indicates that a student has made some useful progress, but has a critical error. Incomplete or Insufficient is reserved for solutions that don’t show enough work to evaluate. Each of these marks requires the task to be reattempted, showing fresh work on a new problem. Students can reattempt two tasks per week during office hours, a process that involves submitting a request 24 hours in advance and then visiting Bowman in his office to attempt a new version of the task. Bowman also offers a “reassessment day” at the end of the semester, and often another in the middle of the semester. These are class days on which students can reattempt any tasks they need to; they also act as a break from new content.
As we will see later, many instructors choose to include new attempts on future exams to help manage office hour traffic. Bowman focuses on “live” reattempts, which give him a chance to have an interactive conversation and ask guiding questions. He notes that this can overwhelm office hours. He has found that pre-selecting the new problems before the semester even begins, and keeping copies of them ready in a folder in his office, makes this a workable system for him.
This distinction between revision and a new attempt helps students understand the relative importance of mistakes, guarantees that students can solve problems fully from scratch without important errors, and saves work for both instructor and students.
Students only need to complete a task successfully once to earn credit for it. However, the final exam is a “recertification” of a selection of 10 of the core tasks that everyone must attempt. This is a common approach to final exams, and helps guarantee that students retain core skills. It can also be a way to include a required common final exam. In this case, Bowman chooses the 10 tasks and announces them well in advance. You might also survey students to see which tasks they most want to appear, or base the choice on which tasks most students still need to complete.
In any case, this recertification approach tends to make the final exam higher-stress, and Bowman tells students that he broadens the definition of “successful” on the final exam to include minor errors. Completing a certain number of these “recertified” core skills on the final exam is a requirement for higher grades, as we will see below. Some instructors take a simpler approach, making the final exam “one last chance” to earn each standard, with no special recertification requirements.
Beyond the traditional exams, students also complete a “gateway exam.” This is a way to ensure that students are fluent in the computational skills that form the basis for higher-order work in this and future classes. Students must “pass through the gateway” in order to pass the class with a B- or better. The gateway exam includes 10 direct computational questions covering derivative calculations, a type of calculation commonly taught at the beginning of Calculus. While these kinds of skills could be tested through tasks, the gateway exam setup provides special focus on these computations alone. This leaves the tasks to represent more complex computations and concepts, many of which use the computations covered in the gateway. The gateway exam is graded “Pass” or “Not yet”: Students must complete 9 out of 10 problems correctly to earn a “Pass.” Students have five attempts to pass (although after the first attempt, they may need to complete the rest outside of class), with each attempt containing a new set of computational questions. This is in the spirit of our fourth pillar: reassessments without penalty.
The last component in Bowman’s assessment system is homework, which is effectively “ungraded.” Bowman reports that this is a very effective approach for his students. Homework is purely formative: a chance for students to practice with ideas from class before they are formally assessed on an exam. Because different students need to practice different things, weekly homework assignments include a large number of problems that cover many different tasks. Bowman makes these assignments flexible, for example: “choose 5 problems that you need to practice, from #1–20 in the textbook.” Students don’t necessarily submit their work, however. Instead, they submit a homework report. The report describes which problems the student attempted, why they chose those problems, and any remaining questions they have about them. Bowman or a grader responds to the questions, but all that is recorded is the completion of the report: the problems themselves are not graded. The goal is both to give students practice without pressure and to encourage metacognition.
Bowman reports that this keeps students actively engaged with relevant practice, and is more effective than his previous attempts at letting students do (or not do) homework as they wished. As Bowman says, “These reports get students to keep up with practicing the material, and the attention they get from the grader is targeted to the specific questions each student has.”
The syllabus includes a list of requirements for each letter grade, and all requirements must be satisfied to earn that grade. For example, to earn an A, a student must:
Pass the Gateway Exam.
Submit all 20 homework reports.
Complete all 14 core tasks.
Complete all 20 auxiliary tasks.
Complete all 3 modeling tasks.
Complete 9 core tasks on the final exam.
This is a high bar! On the other hand, an A indicates excellent work. Students can and do stretch to reach a high bar like this, given support and encouragement.
To earn a B, students must:
Pass the Gateway Exam.
Submit 15 (of 20) homework reports.
Complete 12 (of 14) core tasks.
Complete 17 (of 20) auxiliary tasks.
Complete 2 (of 3) modeling tasks.
Complete 7 core tasks on the final exam.
To earn a C, students must:
Submit 10 homework reports.
Complete 10 (of 14) core tasks.
Complete 15 (of 20) auxiliary tasks or pass the Gateway Exam and complete 11 auxiliary tasks.
Complete 1 (of 3) modeling task.
Complete 5 core tasks on the final exam.
“In between” results lead to +/- grades, and the rules for applying these are spelled out in the syllabus. For example, “if all criteria for a letter grade are met as well as two of those for a higher letter grade, then a plus will be added.”
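To make the mechanics concrete, here is a minimal sketch, in Python, of how these letter-grade rules might be encoded. This is our own illustration, not part of Bowman’s system: the names and structure are hypothetical, the alternate Gateway option in the C criteria is omitted, and the “plus” rule is implemented under one possible reading of the syllabus language.

```python
# A purely illustrative encoding of the grade criteria above.
# All names and structure here are hypothetical; Bowman does not compute grades with code.
from dataclasses import dataclass

@dataclass
class Record:
    passed_gateway: bool
    homework_reports: int   # of 20
    core: int               # core tasks completed, of 14
    auxiliary: int          # auxiliary tasks completed, of 20
    modeling: int           # modeling tasks completed, of 3
    final_core: int         # core tasks completed on the final exam, of 10

# Each letter grade is a bundle of requirements, *all* of which must be met.
A = {"gateway": True,  "homework": 20, "core": 14, "auxiliary": 20, "modeling": 3, "final": 9}
B = {"gateway": True,  "homework": 15, "core": 12, "auxiliary": 17, "modeling": 2, "final": 7}
C = {"gateway": False, "homework": 10, "core": 10, "auxiliary": 15, "modeling": 1, "final": 5}
# (The alternate C-level option -- pass the Gateway and complete 11 auxiliary
# tasks -- is omitted to keep the sketch short.)

def requirements_met(r: Record, reqs: dict) -> int:
    """Count how many of a letter grade's requirements this record satisfies."""
    checks = [
        (not reqs["gateway"]) or r.passed_gateway,
        r.homework_reports >= reqs["homework"],
        r.core >= reqs["core"],
        r.auxiliary >= reqs["auxiliary"],
        r.modeling >= reqs["modeling"],
        r.final_core >= reqs["final"],
    ]
    return sum(checks)

def letter_grade(r: Record) -> str:
    """All requirements for a letter must be met; meeting at least two of the
    next-higher letter's requirements adds a plus (one reading of the rule)."""
    for letter, reqs, higher in [("A", A, None), ("B", B, A), ("C", C, B)]:
        if requirements_met(r, reqs) == len(reqs):
            if higher is not None and requirements_met(r, higher) >= 2:
                return letter + "+"
            return letter
    return "below C (see syllabus for D/F criteria)"

# Example: a student who meets all B criteria and several of the A criteria earns a B+.
student = Record(passed_gateway=True, homework_reports=20, core=14,
                 auxiliary=17, modeling=2, final_core=7)
print(letter_grade(student))  # B+
```

The point of the sketch is simply that each grade is an all-or-nothing bundle of requirements, which is why the syllabus needs explicit rules for students who land in between.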
It’s important to include specific policies in your syllabus, like the ones above, to describe what happens if students are “in between” grades. Watch out for, and intervene early with, students who seem to be falling behind in one category.
These grade criteria demonstrate some common choices for instructors using SBG. Grades can be based on how many standards are completed, but they can also be based on which standards, thus focusing higher grades on specific skills (this is done by requiring specific numbers of core, auxiliary, and modeling tasks). Another choice, which we will see later, is to consider how often standards have been met. In Bowman’s case, “once” is the requirement. Thus, each assessment of a task must be written to thoroughly address the requirements of that task.
How well does this work? Bowman reports three major benefits. First, the tasks give students greater clarity about the instructor’s expectations, how they can meet them, and how each assignment fits in with these expectations.
Second, the assessment system drives higher quality conversations with students. Office hour and email conversations tend to be focused on the content of the course and specific ways to improve, rather than concerns over partial credit. As Bowman says, “The reassessments themselves and the discussions they provoke can probe quite deeply into the relevant topics, which is a boon regardless of whether or not an individual attempt succeeds.” Specific feedback on tasks and homework helps direct student studying. Bowman notes that the descriptions of the tasks help students ask better questions, for example “I’m having trouble knowing what it means to interpret an average rate of change.”
Finally, and perhaps most importantly as this book is being written, there is the flexibility of the system under exceptional circumstances. Pepperdine experienced major disruptions in fall 2018, when wildfires forced two weeks of remote classes. Pepperdine also moved online starting in March 2020, as the Covid-19 pandemic took hold. In both cases, “Having a list of tasks helped me determine what was critical—and even what was possible—to cover during this period of remote instruction… At times of crisis, having a list of standards or tasks makes it much easier to sort out what must be kept and what can be trimmed for the sake of guiding students to the essential elements of the course.” Many others reported similar benefits from SBG during the “great pivot” of March 2020: The flexibility and clarity of the system helped them make decisions.
This sort of Standards-Based Grading can work especially well in an online class in general, not just in emergency remote instruction. Quizzes, exams, or similar assessments can be posted on a learning management system, either using a “quiz” or “test” tool, or simply as PDFs. Students enter solutions or, more commonly, scan and upload work (using a phone or tablet to scan, or a third-party tool such as Gradescope). As Bowman says of online assessments, “This of course opens the possibility that students will share the new tasks with others in the class. I have chosen to largely depend on trusting students not to try to get hold of the new tasks ahead of time.” Students have repaid this trust, as often happens with SBG: The reduced stress of assessment also reduces the incentive to look for help out of bounds.
Online tools allow instructors to give detailed feedback, including annotations on PDFs. The instructor can include one column in the student gradebook for each standard, and report progress on each standard by updating those columns.
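As a small, purely hypothetical illustration of that layout, here is a Python sketch with one row per student and one column per task; the one-letter marks stand in for the four categories described above, and any spreadsheet or LMS gradebook can represent the same structure directly.

```python
# Hypothetical gradebook: one column per task, one row per student.
# Task codes are from the examples above; student names and marks are placeholders.
import csv

TASKS = ["C.1", "C.7", "C.14", "A.9", "A.10", "A.16", "M.1", "M.2", "M.3"]  # a subset, for illustration

gradebook = {
    "Student One": {task: "" for task in TASKS},
    "Student Two": {task: "" for task in TASKS},
}

# After grading an exam, update the columns for the tasks it covered:
gradebook["Student One"]["C.1"] = "S"   # Successful: counts toward the final grade
gradebook["Student One"]["C.7"] = "R"   # Minor revisions needed: eligible for a Revision Form
gradebook["Student Two"]["C.1"] = "N"   # New attempt required: fresh work on a new problem

# Export to CSV so per-task progress can be shared or imported into an LMS:
with open("progress.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Student"] + TASKS)
    for student, marks in gradebook.items():
        writer.writerow([student] + [marks[t] for t in TASKS])
```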
This assessment system is a classic example of “pure” Standards-Based Grading. All of the assessments in the class are aligned with clear “standards” (tasks), with built-in opportunities for formative assessment and feedback. Bowman’s three “levels” of tasks address several common concerns about SBG. Modeling tasks assess big-picture ideas and test students’ ability to combine multiple skills to solve more complex problems. Auxiliary and modeling tasks both represent higher levels of Bloom’s taxonomy, ensuring that students who earn higher grades have demonstrated higher-order skills.
Thanks for reading this case study! Do you find it helpful? Was there something you especially liked? What would you add or remove? Let us know in the comments!