7 Comments

This post resonates with my experience and with what I have seen at my secondary school. I have been iterating for years to grow and perfect the process so that it aligns with my objectives and those of my students. As other teachers have seen the results (!!) and asked to model their own practice on mine, they have found it incredibly difficult, bailing out and blaming "the system" rather than acknowledging the inherent challenges or their own unwillingness to examine their practices.

author

This is the kind of experience that inspired this post! I hear a lot of people who think that alternative grading must be done "just so", and who don't feel like they have any power to change or customize it -- of course we're trying to send the opposite message on this blog! I partly blame some of the big names in K-12 who are indeed selling SBG wholesale to schools and are quite rigid about its implementation.


As an educator, I've explored alternative grading methods and their practical implications. While I understand there is no perfect solution, I've encountered the term 'ungrading,' which seems confusing; I prefer the term 'alternative grading.' Those who are ungrading are still grading.

I'm seeking clarification on the following points, which most people do not want to entertain.

1) I use a points-based grading system. However, I often find that people assume I don't provide feedback. In reality, I give triple feedback, even in a class of 80: what was wrong, what is correct, and where to find an equivalent example or theory for what was missed. I also provide an equivalent problem from the textbook. People may say students only look at the points, but my research says otherwise. And students translate feedback into points anyway, even when an alternative grading system insists the points are "not there."

2) I tried multiple-chance testing (a subset of SBG), and it worked out well (https://www.ijee.ie/latestissues/Vol40-2/09_ijee4434.pdf). But it took time away from class meetings. I see people offering many chances for retakes in class; where do they find time for instruction? Some offer retakes during office hours: are those proctored? How do they even have a quiz ready for a retake? Are the same questions given on the quiz over and over? Wouldn't the student then know what will be asked?

3) How do we check for cumulative knowledge or synthesis of knowledge, or for the interleaving effect of not knowing which standard a question belongs to? Why is no final exam given?

4) Deadlines are critical. Forget the argument of behavior vs. learning; it gets old after a while. Some ungrading advocates assign grades based on students' own self-assessments, and an extrovert can talk an instructor into a better grade than an introvert can. Are we grading personality or learning? Students must complete work by a deadline if future topics depend on previous ones; that prerequisite structure is especially common in STEM classes.

5) LMSs do not help much when using SBG. Students get confused about their current grades, and so do I when keeping track of them. I even made a foolproof Excel sheet for students. They still complained, and that was just when I was using multiple-chance testing. Students deserve to know where they stand in an uncomplicated way – we want that in our own jobs.

6) If the latest score is what counts toward a standard, many students who should not procrastinate will continue to do so. The only thing that helps them meet deadlines is that, in an LMS, a deadline shows up on their calendar; an "open until" date does not.

7) Is it equitable that only students who have time, are not working, and are taking fewer classes can take greater advantage of repeated testing?

author
May 6 · edited May 6

Hi Autar, we address almost all of these questions in our book, so I highly recommend starting there: https://www.routledge.com/Grading-for-Growth-A-Guide-to-Alternative-Grading-Practices-that-Promote-Authentic-Learning-and-Student-Engagement-in-Higher-Education/Clark-Talbert/p/book/9781642673814

We also address these questions repeatedly in our blog posts and guest posts, as do many others in their own blogs, books, articles, and social media. So, as you can see, people do engage with them quite a lot!

You also seem to be repeating talking points that have been thoroughly debunked (e.g. that retakes ask the same questions over and over, or that there are no final exams). A quick search of just this blog will turn up many posts on these topics, as well as on LMSs, deadlines, and many others. I strongly encourage you to take time to read through our book and blog as a good starting point. You'll find your questions are thoroughly answered.


I know that all these questions can be overwhelming. I read the blog quite religiously and have watched the YouTube videos one too many times. They answer some of my questions but raise new ones.

Let me ask one question (in parts, since it is all the same topic). I am not trolling; I know how that feels from when I used to teach flipped classes. I just want categorical answers.

Let's suppose you have 15 standards. How many times would you allow a student to be retested on Standard 1? One can use class time, at least during what you call "retake sessions," "final exam time sessions," and "office hours." How do you have a quiz already made for a student when there are 15 standards and, say, 40 students? How do you write so many different questions for the same standard? If a student does not pass, do you let them take it home? That is what I gathered from your co-author in one of the webinars at the National Institute on Scientific Teaching a few years ago.

I want to become an adopter of SBG, as I saw so many advantages just with multiple-chance testing!

author

Different people use different approaches -- there is no single "right" way. But I can give you an example of what I have done successfully in 30-person intro classes.

I schedule weekly quizzes. Ahead of the semester, I make a plan for which standards will be covered on each quiz. That includes repeats of older standards. I try to include each standard 2-3 times during the semester, and once on the final (3x total). I require students to successfully complete standards twice to earn full credit.

Week 1: L.1

Week 2: L.2, L.3, new attempt on L.1

Week 3: L.4, L.5, new attempt on L.2 & L.3

etc.

Sometimes we have an "exam" which is really just a big quiz with more of a focus on reassessments. So for example:

Week 5: new attempts on L.1, L.2, L.3, L.4, L.5, and L.6.

I'm always willing to edit this plan based on the pace of topics in class, if many students need reassessment on a specific standard, etc. But overall this plan helps me pre-write quizzes and reassessments. Everybody, students included, knows what's coming up, and the plan helps me spread out grading workload in a predictable way.
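
To make the bookkeeping concrete, here is a minimal sketch in Python of how a plan like that can be laid out ahead of the semester and sanity-checked. The week numbers and standard labels are made up for illustration; this is not my actual course plan.

from collections import Counter

# Hypothetical quiz plan: which standards appear on which weekly quiz.
quiz_plan = {
    1: ["L.1"],
    2: ["L.2", "L.3", "L.1"],                       # new standards plus a reattempt of L.1
    3: ["L.4", "L.5", "L.2", "L.3"],
    4: ["L.6", "L.4", "L.5"],
    5: ["L.1", "L.2", "L.3", "L.4", "L.5", "L.6"],  # "exam" week: mostly reattempts
}

# Count how often each standard is scheduled across the semester.
appearances = Counter(std for quiz in quiz_plan.values() for std in quiz)

# Full credit requires two successful completions, so flag any standard
# that is not scheduled at least twice before the final.
for std, count in sorted(appearances.items()):
    note = "" if count >= 2 else "  <-- schedule another attempt"
    print(f"{std}: scheduled {count} time(s){note}")

Nothing about the approach depends on a script, of course; the same plan works just as well on paper or in a spreadsheet.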

As you can see, I do *not* offer customized reassessments for each student. They know ahead of time when they can attempt each standard, and I provide a record where they can track what they need to do. Then they do -- or don't -- attempt what they need on each individual quiz.

Others do different things. People who offer customized reassessments in office hours often pre-write 3 or 4 different questions that address each topic, and have those ready in a stack. I've tried that and it works, but it's harder than what I've described here. Different types of assessments might benefit from revision rather than new attempts.
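
For what it's worth, that kind of stack can be managed with very simple bookkeeping. Here is a hypothetical sketch (the standard labels, variant names, and student name are invented, not a description of anyone's actual system) of drawing an unused question variant for a student at a reassessment:

import random

# Hypothetical question bank: a few pre-written variants per standard.
question_bank = {
    "L.1": ["L1-variant-A", "L1-variant-B", "L1-variant-C"],
    "L.2": ["L2-variant-A", "L2-variant-B", "L2-variant-C", "L2-variant-D"],
}

# Record of which variants each student has already attempted.
seen = {}  # (student, standard) -> set of variant names

def next_reassessment(student, standard):
    # Hand the student a variant of this standard they have not tried yet.
    used = seen.setdefault((student, standard), set())
    unused = [q for q in question_bank[standard] if q not in used]
    if not unused:
        return None  # bank exhausted; time to write a new variant
    choice = random.choice(unused)
    used.add(choice)
    return choice

print(next_reassessment("Jamie", "L.1"))  # e.g. L1-variant-B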

I sometimes use take-home assessments, sometimes in-class. It depends on the students and class. There's no one right way.


That is the kind of answer the community is looking for. Thanks.

When I used multiple-chance testing, I had 8 standards. A retest was given 2-3 weeks after each of the three midterm tests.

All students were allowed to retake a test if they wanted. However, their final score would be capped at 90%. This was done to encourage students to focus on learning new topics instead of just striving for a few extra points. To prevent students from retaking a test to gain access to the questions for future preparation, the retake tests were made available to all students on the LMS after they had been administered.

The retake tests for the first-chance unit tests were conducted the following way: for example, unit test 2 covered two topics, Topics 4 and 5. During a 75-minute class session, students took the first-chance test for both topics. The test was then returned to students with grades and proficiency levels for each topic. Two weeks later, during a regular class session, students took the retest for both topics, with 25 minutes allotted for each topic. Students could choose to retake the test on none, one, or both topics, but the start and end time for each topic test was fixed. To maintain academic integrity, if any student left a retake test early, a student coming in late was not allowed to take the test. However, this rule never needed to be enforced.

Additionally, the retake tests consisted of new questions, and they were not algorithmic equivalents of the first-chance test questions. Each student's topic score was updated by one-half of the difference between the retake and the original topic score. This adjustment was only applied if the student's retake score exceeded the original test score.

Although the final exam was a separate component of the grading scheme, it also served as a proxy for a second retake of all topics: topic scores were adjusted in the same way as for the retake tests, with the points for each topic corresponding to the questions asked on the final exam. The final exam was kept as a separate component of the grade because students must recognize the integrated connections among the topics and the course prerequisites, and it helps improve long-term retention. The final exam is also critical because my course is a prerequisite for several other courses in the Mechanical Engineering curriculum.
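
In code, that update rule amounts to something like the following sketch. I am assuming here, for illustration, that the 90% cap on retakes applies to the adjusted topic score, and the example numbers are made up.

def updated_topic_score(original, retake, cap=90.0):
    # Move the topic score up by half the improvement, but only when the
    # retake beats the original attempt. The cap is assumed to apply to
    # the adjusted score (an illustration, not necessarily the exact policy).
    if retake <= original:
        return original
    adjusted = original + 0.5 * (retake - original)
    return min(adjusted, cap)

print(updated_topic_score(60, 80))   # 70.0: halfway between 60 and 80
print(updated_topic_score(85, 100))  # 90.0: 92.5 before the 90% cap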
