Maybe you heard the news. Harvard and MIT, in conjunction with EdX, a non-profit enterprise, have just introduced a software program that grades essays. I admit, as an English teacher regularly bogged down with hundreds of papers, a little part of me finds this prospect dreamy: an actual computer program that could actually, magically, lighten that actual load in my briefcase (or erase the docs off my Google Drive?). Could this be true?
Indeed, the program is being touted as a time-saving method for professors. However, another group of educators, Professionals Against Machine Scoring of Student Essays in High Stakes Assessment, has formed to fight the use of such software. In a statement, the group asserts what surely many of us who grade papers know intuitively: “Computers cannot ‘read.’ They cannot measure the essentials of effective written communication: accuracy, reasoning, adequacy of evidence, good sense, ethical stance, convincing argument, meaningful organization, clarity, and veracity, among others.”
But there’s something else missing from this debate. Or, more accurately, someone missing from this debate: our students.
The “Distinguished” level of the third domain of the Danielson Rubric, “Using Assessment in Instruction,” spells it out clearly: “Assessment is fully integrated into instruction, through extensive use of formative assessment. Students appear to be aware of, and there is some evidence that they have contributed to, the assessment criteria. Questions and assessments are used regularly to diagnose evidence of learning by individual students. A variety of forms of feedback, from both teacher and peers, is accurate and specific and advances learning. Students self-assess and monitor their own progress. The teacher successfully differentiates instruction to address individual student’s misunderstandings.”
Boiled down, what is Danielson asking of us? To build feedback loops into our classrooms, where assessment acts as an ongoing dialogue between teachers and students, and sometimes among the students themselves. In other words, those burdensome papers that sometimes make Sunday evenings feel like marathon grading sessions are not simply tasks to be dealt with, but opportunities to assess what our students truly understand and to tailor our instruction accordingly. For example, if the majority of my students do not understand how to include a counterargument in an essay, guess what I will be addressing in class the next day? If I see that certain students have nailed the skill of integrating evidence smoothly into a paragraph, perhaps the next day I will pair them with students less skilled in this area for a mini-lesson.
As we continue to get our collective heads around both the Common Core Standards and the Danielson Rubric, it’s important to remember that the hard work of teaching writing is also quite messy. As much as we’d like to streamline it, via a machine or some other method, the truth is, there’s no substitute for the brainstorming sessions, the drafting, the revising, the editing, and the formal and informal assessments that are the hallmark of a distinguished teacher’s classroom. Every time we read student work, we’re given a chance to reflect on our own teaching and to ask ourselves how well we’ve embedded literacy practice into our classrooms. Every time we ask students to assess themselves, we’re provided with terrific insight into their ability to monitor their own learning. We’re more able to accurately see how aligned our instruction is to the Common Core. Mostly, we’re erasing that invisible line that exists between teacher and student, a necessary step that makes us much more than graders—indeed, that makes us human.
To learn more, please visit our team-led VLC dedicated to the Danielson Framework.