Feedback, revisions, and the writing process
Tracking growth across drafts in an undergraduate research course
Today’s guest post is from Jane Wageman. Jane is currently a resident scholar at the Collegeville Institute in Minnesota. She holds an MFA in fiction writing from Bowling Green State University, where she taught courses through the University Writing and Creative Writing Programs. She previously taught high school English for eight years. She writes about teaching and writing at the Substack Quick Bright Things.
When I was a new teacher—twenty-two years old, teaching high school English and studying for a master's in education at the University of Portland—my professors encouraged my classmates and me to think of ourselves as researchers in the classroom. It's something that has stuck with me in the years since, this spirit of inquiry when it comes to teaching—a desire to read new studies, reflect on previous practices, and try different approaches at the start of another year.
And so it seemed only natural to make the connection explicit last spring, when I found myself teaching WRIT 1120, a seminar in research writing at Bowling Green State University (BGSU). I'd been reading about alternative grading practices for a couple of years and gradually altering my approaches to grading—marking formative assessments for completion, allowing unlimited revisions on summative assessments—but I was curious about contract grading, and wanted to see how removing letter grades from the final drafts of writing assignments would affect students' relationship to the writing process. Students would still receive extensive feedback on these drafts, but their final grade in the course would depend upon their completion of the four major writing assignments, their attendance at a minimum of three one-on-one conferences to discuss their research project, and their engagement in the revision process between their initial and polished drafts, among other criteria (more on this below). I also wanted to invite students to see the course's grading system as a model of the research process itself: one that began with a meaningful question, the answer to which would shape how I taught future classes.
In the post below, I'll share my experience with contract grading in WRIT 1120, focusing on what I found to be one of the more challenging parts of removing grades from a writing class: tracking student progress over multiple drafts and revisions. I'll provide examples of a couple of systems I tried—namely, two different checklists—before offering thoughts on what I'd do differently in a subsequent course.
The course
I was a graduate student when I taught WRIT 1120, which is a required undergraduate course at BGSU, a public research university in northern Ohio. My class had twenty-six students, most of whom were first-years, though it included sophomores and juniors as well.
The course was organized around a single research paper, with a series of four assignments: (1) an initial project description, in which students articulate a research question and describe their preliminary reading on the topic, (2) an annotated bibliography and research proposal, (3) the research paper itself, and (4) an e-portfolio, in which students reflect back on the semester and offer evidence of their growth in relation to the course goals.
At the start of the semester, I introduced students to the concept of the research process via the topic of grading. We read a few articles on alternative grading, and I modeled a research question based on our reading: What is the most effective way to grade student writing and why? We used this as a segue to look at the grading contract on the syllabus and discuss how grades would function within the course. One of the categories was “Revision”: To earn a grade of “A”, students would be required to demonstrate “clear, substantive revision from rough to polished drafts with all projects.” One of my hopes, I explained to students, was that by fixating less on individual assignment grades, they would focus more on growing through the process of writing—particularly the revisions from one draft to the next.
I told students that we would be returning to the topic of alternative grading throughout the course: As students developed their own research projects, I would provide them with writing samples based around my research question, and we’d also use the topic as a class to practice certain research skills (locating secondary sources on library databases, writing interview questions for primary research, etc.) before students applied these skills to their own projects.
As we workshopped my question, and as I continued teaching the course, several smaller questions developed. Among them was how best to track student revisions across drafts. I wanted a way, not just to see these revisions, but to read them as indicators of the students’ growth: Did the revisions demonstrate improvement with specific research and writing skills from the previous draft? Had they strengthened aspects of the paper’s content or structure? Did they, collectively, form a better paper than the previous draft? Did they offer evidence that the student had engaged with feedback and the revision process?
Before, a graded rubric had served as shorthand for me to compare drafts to one another—a way to direct my attention to parts of the draft that had needed revision, but also a way to translate those revisions back into the question of student growth. Now, however, that shorthand had been removed. I considered including a rubric without the grade, but worried that this would look no different to students than typical graded feedback. I was reluctant to use something like Track Changes in Word or version history in Google Docs, since much of the writing process, as I discuss it with students, is messy: working across multiple documents, moving between print and the screen, starting over on a blank page. I didn't want them to feel encumbered by having to make their process linear and legible in a single document. I continued to use, as I had in the past, the website Draftable to compare drafts: It highlights the passages that have changed between two drafts, which is helpful in indicating the volume of changes and the location of edits. But with all of these—Draftable, Track Changes, version history—the substance of the changes is still in question.
When students and I discussed the grading contract at the end of the semester, part of our conversation would be about the growth from one draft to the next. I wanted to be able to recall how the student had revised the initial draft into the polished one—what growth the draft had demonstrated and, by extension, what growth the student had made.
Tracking student progress: digital and printed checklists
For the first assignment of the semester, the initial project description, I simply commented on two separate assignments that students submitted to Canvas: an initial draft and a polished draft. I ended up wasting time moving between the two assignments, checking my comments on the initial draft against those on the more recent one.
So for the second assignment, the annotated bibliography and research proposal, I switched to a color-coded checklist that I could reference during my conference with students and return to when I provided feedback on their polished draft. The checklist included a key at the top and the basic requirements of the assignment listed below. For each student, I changed the color of each requirement so that students could see whether the item had been "met (no edits needed)," "met (some edits suggested)," "partially met," or "not met." Below each item, I wrote specific comments. I then saved the file in Word and uploaded it to the assignment on Canvas so that the student could see it. I did the same for the second half of the assignment (the research proposal, which had a different checklist) and uploaded it as a separate file. I also left some marginal comments on the document via Canvas.
The benefit was that the student and I could clearly see these comments and items when we were conferencing. However, the multiple files and the electronic format were cumbersome: I found myself toggling between multiple windows on my laptop during the conference itself. Later, when it came time for me to read the student's revised draft, I looked at the checklist, updated my comments, saved the file in Word, and reuploaded it to Canvas. The process was more involved than I would have liked, and I began to doubt that students were going to the trouble of downloading the checklist in the first place. They may have looked at it when I showed it to them during conferences, but had they returned to it when they revised? Would they for any subsequent revisions? How helpful was this as a means of ongoing feedback? Moreover, I had not noted the extent of students' revisions on the checklist, so however helpful it was for the individual assignment, it wouldn't provide useful documentation of "substantial revision" when the student and I discussed the grading contract at the end of the semester.
So for the third assignment, the research paper itself, I switched to a hard-copy checklist that I could mark up by hand, writing comments next to each point. This was far easier to look at during the conference with students about their initial draft, and also easier for me to reference when they turned in their polished draft—I didn't have to find the old assignment on Canvas, download the checklist file, and move between windows. When I updated the checklist, I left my old comments in place and highlighted the items that still needed work, adding a note at the top indicating the substance of the student's overall revisions, which we could then discuss in the final conference.
When we met for that conference, to discuss the final research paper and the contract, we had everything printed: contract, research paper drafts, checklist. We looked at the changes that had been made and were able to talk about where the paper might go if the student had time to develop it further—in fact, if there were items still missing on the checklist that would be fruitful to revise, I allowed the student to resubmit by the end of the week.
Moving forward: incorporating student reflection
Throughout the semester, I witnessed students engaging in the revision process—during in-class activities and peer-editing, during discussions and forum posts where students reflected on their own writing and research, and during my own conversations with them. At times, the lack of grades seemed to free up these conversations, allowing us to focus more on the writing at hand, with an eye towards its further development. The printed checklists proved helpful, both for these conversations and during the final days of class, when students had “studio time” to work on final edits to their papers, many of them referencing the checklists as they made revisions.
But the lists were more reductive than I would have liked, suggesting that students simply needed to fix a discrete number of “problems” with the paper and it would be “done.” Many students tended to focus on the more concrete items on the list, rather than the content-related items that required more critical thinking. The checklists also didn’t give students a sense of the paper’s growth as a whole, from draft to draft: Had the changes they made, collectively, improved the paper?
While tracking student progress is a task for the instructor, it is, in the end, important for students as well. Moreover, I want students to see instructor feedback as part of an ongoing conversation, particularly in a semester-long research project, rather than as a prescriptive list of changes to be made.
And so, if I were teaching this course again, I would keep some kind of checklist—since research papers and annotated bibliographies have many components, and it is helpful for students to have a way to keep track of these parts. But I would reformat it to prioritize the assignment as a whole and incorporate student reflection—perhaps something like this. In this revised form, after completing an initial draft, students would answer a few reflection questions about the content of the assignment before addressing the checklist items below. The instructor would read the student’s reflection alongside the initial draft and add their own feedback, before conferencing with the student. At the end of the conference, in conversation with the instructor, the student would determine which revisions were most important to focus on for the polished draft. The entire checklist, resubmitted with the polished draft, would help document not only what the student planned to revise but why they intended to make these revisions.
The goals, with this revised form, are to give students a larger role in the revising process, situate the smaller components of the assignment within the whole, and keep a short record of the drafting process (initial draft, conference, revisions, polished draft) in a single document that can be referenced easily by both instructor and student. It also makes clearer the relationship between revision and growth by asking students to connect their revision choices to the growth of the paper and, by extension, to their own growth as writers.