Tip-toeing into Alternative Grading: A Beginner’s Journey
How grading for growth fosters more equitable outcomes in student success
Today’s guest post is by Greg Foster-Rice, an Associate Professor who teaches the history of photography at Columbia College Chicago. Like many faculty, he began exploring alternative grading practices during the spring of 2020, when classes switched to remote instruction for the pandemic. He was formally introduced to Grading for Growth when a colleague shared the book with him in 2023, upon his acceptance of the position as Associate Provost for Student Retention Initiatives, which he still holds.
Introduction: Why Were My Rubrics Failing Me?
As a guest writer, I appreciate how these guest posts expand the dialogue around alternative grading. I was first introduced to alternative grading after spending my first fifteen years as a college teacher developing ever lengthier and more detailed rubrics to manage larger and larger class sizes and clarify expectations for my students.
Despite these efficiencies, my reliance on rubrics and percentage-based grades had unintended consequences that I began to notice around 2017. First, as an art historian dedicated to learning actively from actual objects in the classroom or on field trips, I found that traditional exams and rubrics only seemed to dampen the enthusiasm and engagement my students expressed during our activities. Second, rubrics caused my students to fixate on point values instead of experiences and seemed to reduce the opportunity for feedback.
I’m not quite at Alfie Kohn levels of disenchantment with rubrics and I’m still navigating my journey. In fact, I’d never even heard about the four pillars of alternative grading until two years ago when I first read David and Robert’s book. So, this post won’t be some tour-de-force of amazing alternative grading strategies and tools (love those posts!). Rather, this is a story about how some simple efforts to better support my students naturally led me towards some key tenets of alternative grading.
In this post, I’ll highlight how I started to implement formative assessments and a more feedback-oriented grading system. These strategies are part of a broader approach that includes my use of SCALE-UP (Student-Centered Activities for Large-Enrollment Undergraduate Programs) and a flipped classroom for in-class engagement—topics for another day!
Context: Teaching the Humanities at an Accessible Arts & Communication College
If you’re in a college with a high acceptance rate and a desire to improve those students’ outcomes, we likely share similar challenges. Like many, I believe our institutions can thrive by moving beyond a sole focus on “college readiness” to emphasize robust student support by changing our policies, practices, and culture.
My institution, Columbia College Chicago, is a four-year private college focused on preparing students for careers in the arts and communications industries. Our 5,400 undergraduate students pursue majors like filmmaking, illustration, animation, design, photography, and theater. Our acceptance rate averages roughly 90%, and among our undergraduates, 49% are Pell recipients, 55% are BIPOC, 28% are Hispanic, and 51% are first-generation students.
Our mission to provide accessible, affordable arts education to underserved populations, however, offers some distinctive challenges. For example, a newly energized Office of Institutional Effectiveness has identified disparities in persistence, retention, and graduation rates among first-generation, low-income, and BIPOC students. Alongside strategies to improve enrollment, orientation, student life, and academic support, we are therefore revising academic structures, curricula, and pedagogy. And this is where alternative assessments play a role!
In this context, I teach a two-semester History of Photography course to visual arts students aspiring to become professional photographers. My courses are large by our college’s standards, with 60+ students, compared to the college-wide average class size of fewer than 18 students. For learners who prefer creating images to reading, writing, and attending extensive lectures, my courses can feel like dreaded requirements.
My challenge has been to shift my students’ perspective starting on Day One with hands-on, active learning opportunities. By 2020, however, I had fallen into a teaching rut, relying on old lectures and not providing enough opportunities for students to actively engage with the skills and knowledge I hoped to impart, and my completion rates were averaging around 86%.1
The Pandemic and My Journey with Alternative Grading
Everything changed with the pandemic. Before 2020, many of us at the college taught traditionally, centering on the faculty (lecture for survey courses, formal critique for studio classes) rather than students2. Student completion and retention rates were already relatively low and were not meeting the needs of our increasingly diverse, first-generation population, for whom these teaching styles merely reinforced authoritarian social structures. COVID-19 amplified these challenges, spurring dramatic changes.
In Spring and Summer 2020, I joined a faculty taskforce to leverage technologies like Zoom for class meetings and breakout rooms, Canvas Studio for recorded lectures, and the Canvas LMS to address remote learning challenges. These tools frequently improved outcomes: quiet students felt empowered to type responses in chats, annotate slides, take screenshots, and participate in breakout rooms. Teaching in multiple modalities provided valuable insights and helped the college’s faculty deepen and clarify how better to use technology to enhance student agency in the classroom3.
Individually, I began experimenting with alternative grading by eliminating summative assessments and prioritizing feedback. In March 2020, as we shifted to remote learning, I canceled my midterm (15%) and final exam (20%), replacing them with discussion forums (short-form writing, worth 5% each) and handwritten notebooks submitted as PDFs or videos. I never went back. Here’s an example of my current assessments and weights.
I’ve kept four short open-note quizzes on key texts that shaped photography’s historiography. These are my most traditional holdovers, but I’m working on updating them based on insights from the Discussion Forums and Notebooks, which I’ll focus on today.
Unlike a summative exam, the weekly format of the Discussion Forums more successfully engages students in experiential learning by asking them to connect information from the flipped video lectures and readings with the actual photographic objects we explored that week in class. In the case below, that object was Life magazine, vintage copies of which we handled in class that week and which students could then explore from home via the Google archive.
I simplified my rubric to emphasize completion (80%) and meaningful comments on peers’ posts (20%), creating two sub-rubrics within the overall structure. The peer feedback was critical for building a collaborative atmosphere and engaging students in critical yet empathic reading, which we spent time discussing in class. My feedback connected to the assignment’s guiding questions, evaluating how well students addressed prompts and used the provided resources (which in this example supported the use of primary-source digital archives). My rubrics still contain point values since I created them before I learned you can reverse engineer Canvas into something like an EMRN rubric (a project for this summer!). My “hybrid” approach shows how even traditional gradebooks can incorporate somewhat alternative grading strategies.
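The two-part weighting above (80% completion, 20% peer comments) can be sketched as a small calculation. This is only an illustration: the weights come from the post, but the function name, the point scale, and the two-comment cap are hypothetical assumptions, not the author’s actual Canvas setup.

```python
# Hypothetical sketch of a two-part discussion-forum rubric:
# 80% of the score for completing the original post,
# 20% for meaningful comments on peers' posts.
# The two-comment cap and 100-point scale are illustrative assumptions.

def forum_score(post_complete: bool, meaningful_comments: int, out_of: int = 100) -> float:
    """Combine the completion and peer-comment sub-rubrics into one score."""
    completion = out_of * 0.80 if post_complete else 0.0
    # Credit up to two meaningful comments, each worth half of the 20%.
    comments = out_of * 0.20 * min(meaningful_comments, 2) / 2
    return completion + comments

print(forum_score(True, 2))   # 100.0
print(forum_score(True, 1))   # 90.0
print(forum_score(False, 2))  # 20.0
```

Keeping the completion weight dominant reflects the post’s emphasis: the rubric rewards showing up and engaging, while the smaller peer-comment weight nudges students toward collaborative reading.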
My TAs and I managed the assessment of 60+ assignments each week via liberal use of the comment library in Canvas to structure more individualized pieces of feedback. This included always naming the student (“[Student Name], thanks for your submission…we really appreciated…we suggest…”), a practice that one of my TAs introduced and that helps students feel seen, contributing to higher read rates.
My second example, Notebook assessments, posed assessment challenges different from grading the “Remember/Understand” goals of a conventional exam. My instructions explicitly discourage using the notebook as mere stenographic dictation of the video lectures and readings; instead, I ask students to use whatever method suits them – sketch-notes, diagrams, Cornell notes, collages, etc. – to analyze, evaluate, and reflect upon the material. I include links to a variety of note-taking strategies and examples of prior student work (with permission) that have both exceeded and met my standards. As I explain, I want to foster good note-taking skills and hope the notebook may serve as inspiration for work in their studio photography classes.
From the first time I accepted notebooks as a replacement for exams in 2020, this assignment has exceeded all my expectations in terms of allowing me to get a better sense of my students’ engagement with the material. Their reflections on the video lectures and readings, prompted by a few open-ended questions, have shown unprecedented depth of evaluation compared to essay responses on exams.
To simplify my grading, I used a rubric pared down to three criteria (coverage, organization, depth of engagement) and assessed each category on a scale of Exceeds, Meets, Not Yet, and Not Submitted. I had to explain to students that the terms on this scale were more important than the points I was using as placeholders in Canvas. While it has largely worked, I plan on simplifying my three-criteria rubric to a single, general rubric that I hope will reduce my students’ desire to “satisfy the rubric” and also encourage more resubmissions. What I am seeing is growth between the first Notebook Check in Week 7 and the second Notebook Check in Week 15, so I also plan on increasing the frequency of the checks to Weeks 4, 9, and 15 in order to create more opportunities for longitudinal improvement.
For the Notebook assessments, I’m probably most proud of how we provide the feedback outside the rubric. In order to submit their handwritten notebooks on the Canvas platform, my students record short videos flipping through their pages and reflecting on key takeaways, which offer opportunities for self-assessment. Around 2022 we switched from Canvas Studio to Panopto, which allowed me to embed my feedback directly into their videos, as caption overlays that they see in the returned submission. This not only makes the rubrics secondary to the more fluid, personalized comments embedded in the videos, but it also mimics the way social media users create “response videos” with overlays of text etc. That, in turn, means that my students are more likely to read my feedback and respond to it!
These experiences underscore the importance of how we deliver feedback to a generation born into a different media ecosystem. This semester, for example, I’m going to experiment with video feedback to enhance rapport and engagement. We also leaned heavily into Canvas’ general feedback, which created threaded conversations between us as graders and students, thereby supporting timely resubmissions and fostering stronger connections between feedback and student progress.
Challenges Along the Way
My experiments were not without their hiccups. Since roughly 2015, I had what I felt was a progressive “No Late Work / Always Allow Resubmissions on Timely Work” policy. Though I always intended it to be flexible in emergencies, leading with “No Late Work” likely discouraged some students from communicating about absences and missing work. That came to a head in 2022. Although the worst days of the pandemic were subsiding, chronic absenteeism and disengagement, fostered by the social isolation of remote learning during high school, presented new challenges with this cohort of students. Many of them weren’t talking to me about their absences, missing assignments, or opportunities to resubmit assignments. So, I redrafted my “No Late Work / Always Allow Resubmissions on Timely Work” policy as “Resubmissions are always encouraged to improve your success / Late work allowed in emergency circumstances - let me know if you have questions.” In this regard, I’ve been heavily influenced by recent research on communication science within student support roles about “how you say it matters.”
I also began to recognize the flaws of numerical grading, especially the harsh impact of zeros—often due to emergencies—on otherwise high-performing students in post-pandemic circumstances. Before discovering Grading for Growth and this blog’s community, I was drawn to Douglas Reeves’ “The Case Against Zero.” Reeves argues that zeros distort percentage-based systems, disproportionately affect vulnerable students, make recovery difficult, and demoralize students, discouraging persistence.
Reeves’ insights offered a quick solution to some issues with numerical grading’s validity and fairness. So, I adopted a lowest grade threshold of 50/100 for missing assessments, which helped many students stay on track and more closely aligned with their 4.0 GPA scale. While not perfect, this system exposed the statistical and explanatory weaknesses of traditional numerical grading. It also allowed me to implement a practical alternative within Canvas while working toward a true EMRN rubric for the next academic year.
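The arithmetic behind Reeves’ argument, and the 50/100 floor described above, can be seen in a small worked example. The scores here are hypothetical, invented purely to illustrate the distortion; only the 50-point floor itself comes from the post.

```python
# Illustration of how a single zero distorts a percentage average,
# and how a 50/100 floor for missing work softens that distortion.
# The scores below are hypothetical, not from an actual gradebook.

scores = [92, 88, 95, 0, 90]  # one missed assignment recorded as a zero

raw_average = sum(scores) / len(scores)
# (92 + 88 + 95 + 0 + 90) / 5 = 73.0 -> a C for an otherwise A-level student

floored = [max(s, 50) for s in scores]  # apply the 50-point floor
floored_average = sum(floored) / len(floored)
# (92 + 88 + 95 + 50 + 90) / 5 = 83.0 -> recovery remains possible

print(raw_average, floored_average)  # 73.0 83.0
```

The point is not that a missed assignment should be free, but that on a 0–100 scale a zero spans five letter-grade bands below an F, while each passing grade spans only ten points; the floor makes the numeric scale behave more like the 4.0 GPA scale it feeds into.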
Initial Outcomes of Alternative Approaches and Next Steps
In Spring 2020, despite many of our students facing dire circumstances, 100% of my students completed my class—a first for me, compared to pre-pandemic rates almost 15 points lower! While multiple factors contributed to this success, alternative assessments seemed pivotal. Many of those students later achieved success in graduate school or professional photography, and they also reported a greater sense of camaraderie from collaborating with their peers more regularly, reinforcing that changes to teaching and assessment can significantly impact students’ self-perception and outcomes.
Shifting the focus from rubrics to feedback has made grading more rewarding for me and my teaching assistants. Improved feedback mechanisms and resubmission policies foster more of a growth mindset in students, and digital tools like Canvas and Panopto make alternative grading manageable and personable. A specific area for improvement remains in helping my TAs align with these new expectations, as some students noted in their course evaluations that the TAs provided less feedback than I did when I graded their work. But students appreciate the overall emphasis on feedback and opportunities for revision.
While I’ve not yet repeated the 100% completion rate of Spring 2020, my average course completion rate rose from 86% pre-2020 to 93% post-2020. And I’ve seen especially notable gains for first-generation students (77% to 100%) and Pell recipients (79% to 97%) in my History of Photography 1839-1940 class over the past three semesters. During that time, students in that course also have an average persistence rate of 91% to the next term. My suspicion is that my practices help students feel seen as learners, rather than merely grade earners. This is something I’d like to continue improving upon, knowing that a variety of complex factors outside my course and alternative grading are at play with regards to persistence. Building off the self-assessment this blog post has provided me, this semester I plan on holding some brown-bag lunches on alternative grading to deepen our campus-wide dialogue on the subject. I also hope to connect alternative grading with other strategic initiatives like curricular redesign and directing students to appropriate academic and well-being support services. And I plan on leaning further into the four pillars of alternative grading in my own classes.
Overall, my outcomes affirm that adopting student-ready practices, including alternative grading, can enhance persistence and success in the Humanities. I appreciate that by rethinking grading as one among many tools, we can help more students achieve their goals and succeed as college graduates.
What are your strategies, stories, and opportunities to improve student success through alternative grading?
1. By completion rate, I refer to the rate of students who complete the class with a passing grade, which is essentially the opposite of the DFW rate as calculated by my college’s office of Institutional Effectiveness. That said, majors make up the majority of my students, and they require a C or above to pass.
2. If you’re unaware of “formal critique” in studio classes, it is a process whereby the faculty member leads, and frequently dominates, a critique of student work on the wall. Criticism is, of course, a form of feedback, but the format of “formal critique,” which oftentimes focuses on formal (visual, technical) skills, has come under significant reconsideration lately. Perhaps the best example of alternative critique is the Liz Lerman Critical Response Process.
3. In fact, I retained Zoom for my in-person classes when I returned to on-campus teaching in 2021. It allows students to use those same functions in the classroom, ensuring that everyone can participate.