(Un)necessary teachers’ work? Lessons from England.

by Susanne Gannon

When I disembarked at Heathrow a few weeks ago, my first purchase in pounds, as always, was a copy of The Times to read on the train into the city. The second-page headline, “CR (Creative Original): Grades on schoolwork replaced by codes” (Bennett, 2017), caught my eye. Skimming the article in my dazed, jetlagged state was not ideal for a critical reading, but I snapped a photo with my phone of the final paragraph:

“In 2014 the government asked teachers to tell them what created unnecessary work. Three big areas were marking, planning and data management.”

I recognise that the data deluge in schooling is now overwhelming, that it may be driven by externally imposed system imperatives, and that it is not always put to use to improve student learning. However, having spent my professional life as a secondary English teacher, tertiary teacher educator and researcher, I could not see how “marking” and “planning” could be regarded as “unnecessary work” for teachers.

Planning is surely at the heart of teachers’ work. Otherwise, how do we claim our status as professionals? Ideally we don’t just wing it in the classroom, nor do we follow prescriptive scripts. Systematic, responsive, syllabus-informed planning of purposeful sequences of learning and meaningful resources is what makes the difference for individuals and groups of students. Well-selected and fine-grained data about student progress (not necessarily only the numerical data favoured by educational systems) should of course inform such planning, as skilled teachers identify gaps and opportunities for extension and tailor their planning to their students’ needs and potential.

Having high expectations and creating the conditions – through careful and ideally collaborative planning – for students to succeed and to excel are hallmarks of quality teachers. These features are characteristic of exemplary teaching in disadvantaged contexts (Lampert & Burnett, 2015; Munns, Sawyer & Cole, 2013). Careful planning need not preclude flexibility, creativity and authenticity in learning and assessment practices; on the contrary, it may enable these qualities (Hayes, Mills & Christie, 2005; Reid, 2013). As many of these authors stress, good planning is often underpinned by teachers’ disposition to become researchers of learning within their own classrooms. Where teachers are given some agency and capacity to gather and use data, problems are less likely to arise at the low level of time-consuming and potentially meaningless “data management” that teachers in England perceive as “unnecessary work”.

Marking is of course close to my heart as a secondary English teacher, and I have spent countless hours of my life providing written feedback on student work. Whilst I have become adept at designing and using outcomes-based rubrics and criteria sheets since their introduction with outcomes-based assessment and curriculum in the mid-1990s, I have always endeavoured to provide tailored and specific feedback to students on their texts.

This for me is “marking” as a process, and I think of it – in ideal circumstances – as something like a dialogue on the page between student, text and teacher, and an opening towards further dialogue. It features in formative as well as summative assessment contexts (apart from exams). Now it features in the thesis chapters in progress of my current doctoral students. In a perfect world it is diagnostic, supportive, explicit and critical in combination, and students take heed. Portfolios, peer assessment and self-assessment processes and tools can be incorporated. As Munns et al. (2013) describe, sharing assessment responsibility is an important component of the insider school. The volume and pressure of marking have always been problematic, however, when short timelines for results and sheer numbers of students across multiple classes work against ideal scenarios. My research into creative writing in secondary schools (e.g. Gannon, 2014) shows how English faculties were able to work collegially to support senior students as they developed major works in English. Marking, at best, can be rewarding, encouraging and useful for students and for teachers.

Where, then, does the aversion to marking come from for teachers in England? The article in The Times does not provide any pointers towards the government survey of 2014; rather, it announces a large randomised controlled trial to be funded by the UK-based Education Endowment Foundation, based on a report reviewing written feedback on student work that the Foundation commissioned and recently published (Elliot et al., 2016). The opening of the executive summary of the report provides further detail:

[T]he 2014 Workload Challenge [UK] survey identified the frequency and extent of marking requirements as a key driver of large teaching workloads. The reform of marking policies was the highest workload-related priority for 53% of respondents. More recently, the 2016 report of the Independent Teacher Workload Review Group [UK] noted that written marking had become unnecessarily burdensome for teachers and recommended that all marking should be driven by professional judgement and ‘be meaningful, manageable and motivating’. (2016, 4)

Well, of course! What has gone wrong in England that marking is not driven by these qualities? Are there lessons for us in Australia (yet again from England) about what not to do in educational reform? Although the report acknowledges that there is very little evidence or research into written marking, its authors nevertheless identify some inefficient and apparently widespread practices: triple-marking, awarding grades for every piece of student work (so that the grades distract students from the feedback), requiring too many texts from students, marking excessive numbers of student texts, providing low-level corrections rather than requiring students to take some responsibility for corrections and improvements, and moving on without giving students time to process and respond to feedback.

Despite the caveat in the opening section, the report is worth reading in full (though it has been criticised locally, e.g. Didau, 2016). Secondary teachers are much more inclined to put a grade on every piece of student work, the authors note (2016, 9). Unsurprisingly, offering clear advice on how a student may improve their work in a particular dimension seems to be more useful than broad comments (‘Good work!’) or excessively detailed and overwhelming commentary (2016, 13). Targets or personalised and specific “success criteria” may be effective, particularly where students are involved in establishing them (2016, 20; see also Munns et al., 2013).

It is in this part of the report that the overall logic of the newspaper article becomes apparent. Buried well down in the subsection on “Targets” is the following comment:

Writing targets that are well-matched to each student’s needs could certainly make marking more time-consuming. One strategy that may reduce the time taken to use targets would be to use codes or printed targets on labels. Research suggests that there is no difference between the effectiveness of coded or uncoded feedback, providing that pupils understand what the codes mean. However the use of generic targets may make it harder to provide precise feedback. (2016, 20).

The Times headline is therefore not quite accurate. It seems that “grades” will not be replaced by “codes”; rather, teachers’ written comments will be replaced by codes. In another article, “Schools wanted to take part in marking without grading trial” (Ward, 2017), this approach is called “FLASH Marking”, an initiative developed in-house by a secondary school in northwestern England that will be rolled out to 12,500 pupils in 100 schools (EEF, 2017). The school claims that teachers will now be able to mark a class of Year 11 exam papers in an hour. Students will receive an arrow (at, above or below expected target) and codes such as CR = “creative original ideas” and V = “ambitious vocabulary needed”.

It seems from these news stories (and presumably the EEF will eventually put the design protocols on its website) that two different factors are being measured: one is holding back grades and the other is using codes instead of written comments. I’m curious but ambivalent; after all, at university it is now mandatory to use “Grademark” software for coursework students. This enables teachers to provide generic abbreviated feedback (“codes”) but also gives us the opportunity to personalise responses and to supplement them with an extended written comment, or even an audio-recorded comment. These are highly personalised and appreciated by students.

To turn back to the English example, I wonder whether the randomised controlled trial design (in this case an efficacy trial that will be evaluated by Durham University) means that participating schools will not be able to improvise around the conditions of the feedback. At least, given the need for a control (or “business as usual”) group, if the reduction of feedback to codes proves not to improve student results, the damage will be limited to half the participating schools and students. The news articles are unclear about the purpose of the study, which is described more as a way to reduce teacher workload than to improve student learning. However, the EEF project description also mentions, reassuringly, that the rationale is focused on student outcomes, as “specific, actionable, skills-based feedback is more useful to students than grades” (2017). The project will follow Year 10 students in senior English classes through to the end of secondary school, with a report to be published in 2021. Already, I can’t wait.

References

Bennett, R. (June 17, 2017). CR (Creative original idea): grades on schoolwork replaced with codes. The Times.

Didau, D. (May 18, 2016). A marked decline? The EEF’s review of the evidence on written marking. The Learning Spy Blog.

Education Endowment Foundation (2017). Flash Marking. https://educationendowmentfoundation.org.uk/our-work/projects/flash-marking/

Elliot, V., Baird, J., Hopfenbeck, T., Ingram, J., Thompson, I., Usher, N., Zantout, M., Richardson, J., & Coleman, R. (2016). A marked improvement? A review of the evidence on written marking. Education Endowment Foundation. https://educationendowmentfoundation.org.uk/resources/-on-marking/

Gannon, S. (2014). ‘Something mysterious that we don’t understand…the beat of the human heart, the rhythm of language’: Creative writing and imaginative response in English. In B. Doecke, G. Parr & W. Sawyer (Eds), Language and creativity in contemporary English classrooms (pp. 131-140). Putney: Phoenix Education.

Hayes, D., Mills, M., & Christie, P. (2005). Teachers & schooling making a difference: productive pedagogies, assessment and performance. Allen and Unwin.

Lampert, J. & Burnett, B. (Eds). (2015). Teacher education for high poverty schools. Springer.

Munns, G., Sawyer, W. & Cole, B. (Eds). (2013). Exemplary teachers of students in poverty. Routledge.

Reid, J. (2013). Why programming matters: Aporia and teacher learning in classroom practice. English in Australia, 48(3), 40-45.

Ward, H. (June 16, 2017). Schools wanted to take part in marking without grading trial. Times Educational Supplement. https://www.tes.com/news/school-news/breaking-news/schools-wanted-take-part-marking-without-grading-trial


Dr Susanne Gannon is an Associate Professor in the School of Education and a senior researcher in the Centre for Educational Research at Western Sydney University, Australia.