Category Archives: Education Policy and Politics

Mathematics education in Australia: New decade, new opportunities?

Post by Associate Professor Catherine Attard

As we prepare for a new school year in a new decade, it is an apt time to reflect on the last ten years of mathematics education and consider the next ten. What, if anything, will change in our classrooms and school systems? Or will it be a case of the more things change, the more they stay the same?

Current challenges in mathematics education

Consider the current context of mathematics education in Australia and beyond. Over the past decade we have seen an apparent decline in senior secondary students’ enrolments in high-level mathematics courses. We have also faced continued challenges with students disengaging from mathematics and failing to see its relevance. The last decade has also seen a significant increase in the number of out-of-field teachers in secondary mathematics classrooms, and we do not yet fully understand the potential impact of this on student learning.

According to media reporting of the 2018 Programme for International Student Assessment (PISA) results, Australian students’ mathematical literacy has declined and we are being outperformed by countries such as China, Singapore and Estonia. Yet take a closer look at the results and you will notice that there are no significant differences or trends since the last round of PISA testing. Nothing has really changed, but is that good enough?

Students in Australia and internationally continue to experience disengagement from mathematics as early as the primary school years. Mathematics is still viewed by many as a subject reserved for the ‘smart’ kids, and it remains socially acceptable to openly claim to be “just not good at maths” or “not a maths person”. Despite research into student engagement identifying the elements required to address these issues, along with an abundance of fine-grained research into how students best learn specific aspects of mathematics and ways to harness the affordances of digital technologies, it appears we still face challenges. These challenges, relating to student attitudes, engagement, and a reduced desire to continue the study of mathematics beyond the compulsory years, often result in lower academic achievement. What can we, as leaders and teachers, do differently in this new decade to ensure positive change? Can we make changes that will ultimately result in an upward trend, with engaged students who value mathematics?

The tensions for teachers

Leaders and teachers experience tensions in their day-to-day teaching of mathematics. Should we teach to a test, or should we teach according to the specific and unique needs of our students? The levels of accountability due to high-stakes testing such as NAPLAN and PISA have, in many cases, informed teaching practice due to the linking of results with school reviews. While NAPLAN was originally intended to be a diagnostic test, it has, according to Reid (2019), “moved from being a mechanism to check the pulse of one part of the education system, to being the reason that schools exist” (p. 41). A further effect of standardised testing is the use of textbooks and other resources designed to prepare students for those tests rather than developing conceptual understanding using a broad range of pedagogies and rich tasks.

Standardisation vs. Future-focused education

In his recent publication Changing Australian Education, Reid points out that on the flip side of this educational debate is what is often referred to as ‘21st-century learning’. This future-focused approach includes strategies that appear to conflict with the standardisation approach that often results from high-stakes testing. Student-centred strategies such as inquiry and project-based learning, flexible student groupings and the inclusion of general capabilities all espouse future-focused education, requiring students to be flexible, adaptable, agile and collaborative (Reid, 2019). All of these strategies are already embedded within our current mathematics curriculum, so while we may be conflicted in terms of teaching to the test or taking a more student-centred approach, we have, through our mandated curriculum, licence to plan and teach in ways that are more meaningful for our students, and in time, change the landscape of mathematics education in this country.

What does this mean for mathematics in schools and classrooms?

One of the effects of a standardised approach is the ‘silo effect’ in how the mathematics curriculum is delivered in classrooms. Topics taught in isolation for the purpose of reporting and testing often result in students struggling to apply mathematics in novel situations and in making connections within and across mathematics topics. This then leads to disengaged students and a perception that mathematics is a practice restricted to the classroom, rather than a way of understanding and making sense of the world we live in.

The following is a brief list of suggestions for leaders and teachers that may help combat the issues discussed above, and more importantly, lead to positive changes to student perceptions and performance in mathematics:

Scope and Sequence

A school’s scope and sequence document should reflect the big ideas in mathematics as well as the relationships across and within the curriculum strands. It should also be flexible to allow teachers the opportunity to spend more or less time on content in alignment with the needs of their particular students. The scope and sequence should also feature the processes of mathematics concurrently with the content. That is, the Australian Curriculum Proficiencies or the Working Mathematically strand in NSW.

Teachers should also be given the opportunity to exercise their professional judgement. If schools subscribe to commercial programs that remove this judgement, individual student needs cannot be met. No program can replace the pedagogical relationships between a teacher and his or her students. These relationships are an essential element of teaching that directly influences student engagement and learning (Attard, 2014).

Pedagogy

Our curriculum consists of two distinct areas: mathematical content and mathematical processes. We need to teach content via the processes. That is, we should be teaching through a problem-solving approach rather than teaching content in isolation. This reflects a ‘just in time’ approach as opposed to a ‘just in case’ approach. Teaching via problem-solving provides a context and a need to learn specific content in a way that has meaning for students. Teaching through a ‘just in case’ approach (teaching content in isolation) separates the mathematics from the numeracy and does not promote thinking and reasoning.

Using a range of resources, including concrete and digital, throughout primary and secondary schooling is also important if we are to improve students’ conceptual understanding in mathematics. Consider resources that can be used flexibly, and consider how the use of digital technology can not only enhance mathematical understanding by providing alternate and dynamic representations, but also improve the teacher/student relationship by providing alternate avenues of communication, assessment and feedback.

Consider emphasising the ‘M’ in STEM and highlighting numeracy across the broader curriculum. While funds are still being heavily invested into STEM initiatives we must take the opportunity to ensure mathematics, which is the language of STEM, is prioritised. Opportunities for students to use mathematics in a range of contexts are critical if we want them to understand the relevance and make connections.

It takes a village

The phrase “it takes a village to raise a child” applies to mathematics education and improving future mathematics outcomes. Mathematics and numeracy are everyone’s business. Whether you are a primary teacher, a secondary teacher (of a discipline other than mathematics), a parent or carer, a politician, a celebrity, or anyone else with influence on children, we are all responsible for improving mathematics education. So let’s pause, take a deep breath, and think about what we can do differently to improve mathematics for our students as we begin this new decade.

About the author

Catherine Attard is an Associate Professor of Mathematics Education and Deputy Director of the Centre for Educational Research at Western Sydney University. Her research interests include student engagement with mathematics, mathematics pedagogy, financial literacy education and the use of digital technologies in mathematics classrooms.

Contact: c.attard@westernsydney.edu.au | https://engagingmaths.com

References

Attard, C. (2014). “I don’t like it, I don’t love it, but I do it and I don’t mind”: Introducing a framework for engagement with mathematics. Curriculum Perspectives, 34(3), 1-14.

Reid, A. (2019). Changing Australian Education. Sydney: Allen & Unwin.


Conceptual analysis for decolonising Australia’s learning futures: Implications for education

Professor Michael (מיכאל) Singh (ਸਿੰਘ)

Bionote

A postmonolingual teacher-researcher, Professor Singh’s work focuses on extending and deepening teacher education students’ literacy skills through using their full repertoire of languages-and-knowledge; equipping them to meet the demands of teaching Australia’s multilingual students, and increasing their confidence in the added value postmonolingual skills provide graduating teachers. He enjoys watching movies that make postmonolingual practices visible, such as Bastille Day and The Great Wall (长城), and the xenolinguistics of Arrival. Having an interest in polyglot programming, he is able to write, incorrectly, in 11 languages, “I am not a terrorist.”

Re: Conceptualising learning futures

The concepts we use in education are important. Concepts express educational values, assign status to the students with whom we work, and provide the basis for rules for governing the moral enterprise that is education.

Now and then, it is important to pause in our busy working-life to think critically about the concepts we use in education. Against the technologically driven speeding up of education, it is desirable to slow down, to contemplate if some concepts have accumulated unwarranted baggage that poses risks we might have overlooked.

Currently, I am using the method of concept analysis (Walker & Avant, 2005) in a project that is exploring ways of making better use of multilingual students’ repertoire of languages-and-knowledge (Singh, 2019).

Concept analysis provides a framework that educators can use to analyse existing labels related to our working-life so as to develop guidelines for leading students’ learning futures. Findings from my research employing this method are presented below (Singh, 2017; 2018).

The aim of this conceptual analysis was to determine how the concept of ‘culturally and linguistically diverse’ (CALD) was constructed and is interpreted in education.

In determining the defining attributes of CALD, the intellectual roots of this concept can be located in the sociological theory of labelling. Where diversity is framed as a social pathology, it is equated with deviance, standing against the stability of the prevailing cultural-linguistic order in education.

Adusei-Asante and Adibi (2018) indicate that CALD is attributed to students who are framed as problems. They ‘fail’ to meet the requirements of the cultural-linguistic order because they have limited proficiency in a particular version of English.

A historical antecedent for CALD is Australia’s Immigration Restriction Act 1901 which prohibited the educational use of languages from beyond Europe in Australia’s colleges, schools and universities. The dictation test in Section 3(a) of the Act was designed to be failed by persons who spoke languages originating outside Europe and thereby to exclude them and their languages from Australia.

In the 1970s the concept ‘Non-English Speaking Background’ (NESB) was applied to persons in Australia who spoke languages originating from elsewhere than Europe. However, this concept proved inappropriate for measuring linguistic diversity, overly simplistic in its approach to providing educational services, neglectful of the intellectual value of students’ linguistic diversity, and loaded with negative connotations. In its Standards for Statistics on Cultural and Language Diversity (McLennan, 1999) the Australian Bureau of Statistics stated that this concept and related terms should be avoided.

Consequently, CALD began to be used. CALD drew attention to students’ cultural-linguistic characteristics, did not label them based on what they are not, and enhanced professionalisation of those working in this field.

However, CALD is now a borderline concept because it has taken on the negative connotations of NESB (Adusei-Asante & Adibi, 2018).

CALD is now associated with the negative portrayal of students as learning problems. Further, CALD marks students as unable to relate to the prevailing cultural-linguistic expectations of Australian educational institutions. Specifically, CALD is the category for students having difficulty with writing in English; some are said to have no hope of learning English outside academic English literacy programs.

What are the implications of this conceptual analysis for decolonising Australia’s learning futures?

First, Australian educators who speak languages from multilingual Ghana and Iran (e.g. Adusei-Asante & Adibi, 2018) are contributing to the transformational leadership required for decolonising Australia’s learning futures.

Second, from time-to-time it is necessary to question our taken-for-granted use of concepts to explore the challenges they present, rather than treat them uncritically.

Third, to provide more precision in educational terminology there is a need for multiple concepts, rather than looking for a single concept to replace NESB or CALD.

Fourth, the century-old prohibition on using languages from outside Europe for knowledge production and dissemination in Australia’s colleges, schools and universities must be reversed.

To illustrate the possibilities for postmonolingual education and research let us briefly consider concepts related to International Women’s Day (8th March 2019). To add educational value to the capabilities of students who speak English and Zhōngwén (中文), they could make meaning of issues relating to ‘thinking equal, building smart, innovating for change’ by:

  1. thinking marriage equality through Li Tingting (李婷婷) and Li Maizi (李麦子)
  2. using the cross-sociolinguistic sound similarities of Mǐ Tù (米兔) to explore what it means for sexual harassment regulations
  3. building knowledge in METALS — mathematics, engineering, technologies, arts, language and science — through using the concept chìjiǎo lǜshī (赤脚律师) for critical thinking
  4. building research smarts through theorising population policy using the concept of shèngnǚ (剩女)

Slowing down to decolonise Australia’s learning futures reminds us that a source of educational knowledge is internal to student-teachers themselves and is to be found in their repertoire of languages-and-knowledge.


Acknowledgement

Thanks to the Decolonising Learning Futures: Postmonolingual Education and Research cohort for their feedback on this post.

References

Adusei-Asante, K., & Adibi, H. (2018). The ‘Culturally and Linguistically Diverse’ (CALD) label: A critique using African migrants as exemplar. The Australasian Review of African Studies, 39(2), 74-94.

McLennan, W. (1999). Standards for Statistics on Cultural and Language Diversity. Canberra, Australia: Australian Bureau of Statistics.

Singh, M. (2017). Post-monolingual research methodology: Multilingual researchers democratizing theorizing and doctoral education. Education Sciences, 7(1), 28.

Walker, L. & Avant, K. (2005). Strategies for Theory Construction in Nursing. Upper Saddle River, NJ: Pearson-Prentice Hall.

Who benefits from online marking of NAPLAN writing?

By Susanne Gannon

In 2018 most students in most schools will move to an online environment for NAPLAN. This means that students will complete all test sections on a computer or tablet. Test data that is entirely digital can be turned around more rapidly so that results will be available for schools, systems and families much faster.

The implication is that the results can be put to use to assist students with their learning, and teachers with their planning. While this appears to address one of the persistent criticisms of NAPLAN – the lag between testing and results – other questions still need to be asked about NAPLAN. Continuing concerns include high-stakes contexts and perverse effects (Lingard, Thompson & Sellar, 2016), the marketization of schooling (Ragusa & Bousfield, 2017), the hijacking of curriculum (Polesel, Rice & Dulfer, 2014) and the questionable value of NAPLAN for deep learning (beyond test performance).

Almost ten years after its introduction, NAPLAN has been normalised in Australian schooling. Despite some tweaking around the edges, the original assessment architecture remains intact. However, the move to online delivery and automated marking represents a seismic shift that demands urgent attention.

Most student responses in NAPLAN are to closed questions. In the new online format these include multiple choice, checkbox, drag and drop, reordering of lists, hot text, lines that can be drawn with a cursor and short-answer text boxes. These types of answers are easily scored by optical recognition software, and have been since NAPLAN was introduced.

However, the NAPLAN writing task, requiring students to produce an extended original essay in response to an unseen prompt, has always been marked by trained human markers. Markers apply a detailed ten-point rubric addressing: audience, text structure, ideas, persuasive devices, vocabulary, cohesion, paragraphing, sentence structure, punctuation and spelling. In years when narrative writing is allocated, the first four criteria differ; however, the remaining six remain the same. Scores are allocated for each criterion, using an analytic marking approach which assumes that writing can be effectively evaluated in terms of its separate components.

It is important to stress that online marking by trained and highly experienced teachers is already a feature of high stakes assessment in Australia. In NSW, for example, HSC exams are marked by teachers via an online secure portal according to HSC rubrics. The professional learning that teachers experience through their involvement in such processes is highly valued, with the capacity to enhance their teaching of HSC writing in their own schools.

Moving to automated marking (called AES, or Automated Essay Scoring, by ACARA; also called machine-marking, computer marking or robo-marking), as NAPLAN proposes, is completely different from online marking by teachers. While the rubric will remain the same, judgement of all these criteria will be determined by algorithms pre-programmed into software developed by Pearson, the vendor that was granted the contract. Algorithms cannot “read” for sense, style, context or overall effectiveness in the ways that human experts can. All they can do is count, match patterns, and apply proxy measures to estimate writing complexity.
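To make concrete what “count, match patterns, and apply proxy measures” can look like, here is a deliberately naive sketch of surface-feature extraction of the general kind AES systems build on. This is purely illustrative: the function name and the three features are my assumptions, and the actual Pearson/ACARA software is proprietary and far more elaborate.

```python
# Illustrative only: crude surface statistics of the kind automated essay
# scoring systems use as proxies for writing quality. NOT the actual
# Pearson/ACARA algorithm, whose details are not public.
import re

def proxy_features(essay: str) -> dict:
    """Compute surface statistics that stand in for judgements of quality."""
    words = re.findall(r"[A-Za-z']+", essay.lower())
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    return {
        # Sheer length is the oldest and bluntest proxy of all.
        "word_count": len(words),
        # Type-token ratio: unique words / total words, a vocabulary proxy.
        "lexical_diversity": len(set(words)) / len(words) if words else 0.0,
        # Longer sentences are (naively) treated as more "complex" writing.
        "avg_sentence_length": len(words) / len(sentences) if sentences else 0.0,
    }

print(proxy_features("The cat sat. The cat sat on the mat!")["word_count"])  # prints 9
```

Note that none of these measures can tell whether an essay addresses its audience, develops an idea, or makes sense, which is precisely the criticism raised below.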

ACARA’s in-house research (ACARA NASOP Research Team, 2015) insists on the validity and reliability of the software. However, a recent external evaluation of ACARA’s Report is scathing. The evaluation (Perelman, 2017), commissioned by the NSW Teachers’ Federation from a prominent US expert, argues that ACARA’s research is poorly designed and executed. ACARA would not supply the data or software to Perelman for independent examination. However it is clear that AES cannot assess key aspects of writing including audience, ideas and logic. It is least effective for analytic marking (the NAPLAN approach). It may be biased against some linguistic groups. It can easily be distorted by rewarding “verbose high scoring gibberish” (Perelman, 2017, 6). The quality of data available to teachers is unlikely to improve and may lead to perverse effects as students learn to write for robots. The risk of ‘gaming’ the test is likely to be higher than ever, and ‘teaching to the test’ will take on a whole new dimension.

Human input has been used in ACARA’s testing of AES in order to train and calibrate the software and in the future will be limited to reviewing scripts that are ‘red-flagged’ by the software. In 2018 ACARA plans to use both human and auto-marking, and to eliminate humans almost entirely from the marking process by 2019. In effect, this means that evaluation of writing quality will be hidden in a ‘black box’ which is poorly understood and kept at a distance from educational stakeholders.

The major commercial beneficiary, Pearson, is the largest edu-business in the world. Educational assessment in the UK, US and now Australia is central to its core business. Details of the contract, and the increased profits that will flow from the Australian government to Pearson from the automated marking of writing, are not publicly available. Pearson has already been involved in NAPLAN, as several states contracted Pearson to recruit and train NAPLAN markers. Pearson has been described as a “vector of privatisation” (Hogan, 2016, 96) in Australian education, an example of the blurring of social good and private profit, and of the shifting of expertise from educators and researchers to corporations.

Writing is one of the most complex areas of learning in schools. NAPLAN results show that it is the most difficult domain for schools to improve. Despite the data that schools already have, writing results have flatlined through the NAPLAN decade. Negative effects and equity gaps have worsened in the secondary years. The pattern of “negative accelerating change” (Wyatt-Smith & Jackson, 2016, 233) in NAPLAN writing requires a sharper focus on writing standards and greater support for teacher professional learning. What will not be beneficial is further narrowing the scope of what can be recognised as effective writing, artfully designed and shaped for real audiences and purposes in the real world.

NAPLAN writing criteria have been criticised as overly prescriptive, so that student narratives demonstrating creativity and originality (Caldwell & White, 2017) are penalised, and English classrooms are awash with formulaic repetitions (Spina, 2016) of persuasive writing NAPLAN-style. Automated marking may generate data faster, but the quality and usefulness of the data cannot be assumed. Sustained teacher professional learning and capacity building in the teaching of writing – beyond NAPLAN – will be a better investment in the long term. Until then, the major beneficiaries of online marking may be the commercial interests invested in its delivery.

References

ACARA NASOP Research Team (2015). An evaluation of automated scoring of NAPLAN Persuasive Writing. Available at: http://nap.edu.au/_resources/20151130_ACARA_research_paper_on_online_automated_scoring.pdf

Caldwell, D. & White, P. (2017). That’s not a narrative; this is a narrative: NAPLAN and pedagogies of storytelling. Australian Journal of Language and Literacy, 40(1), 16-27.

Hogan, A. (2016). NAPLAN and the role of edu-business: New governance, new privatisations and new partnerships in Australian education policy. Australian Educational Researcher, 43(1), 93-110.

Lingard, B., Thompson, G. & Sellar, S. (2016). National Testing in schools: An Australian Assessment. London & New York: Routledge.

Polesel, J., Rice, S. & Dulfer, N. (2014). The impact of high-stakes testing on curriculum and pedagogy: a teacher perspective from Australia. Journal of Education Policy, 29(5), 640-657.

Ragusa, A. & Bousfield, K. (2017). ‘It’s not the test, it’s how it’s used!’ Critical analysis of public response to NAPLAN and MySchool Senate Inquiry. British Journal of Sociology of Education, 38(3), 265-286.

Wyatt-Smith, C. & Jackson, C. (2016). NAPLAN data on writing: A picture of accelerating negative change. The Australian Journal of Language and Literacy, 39(3), 233-244.


Associate Professor Susanne Gannon is a senior researcher in the School of Education and Centre for Educational Research at Western Sydney University, Australia.

(Un)necessary teachers’ work? Lessons from England.

by Susanne Gannon

Disembarking at Heathrow a few weeks ago, my first purchase in pounds as always was a copy of The Times to read on the train into the city. The second page headline, “CR (Creative Original): Grades on schoolwork replaced by codes” (Bennett, 2017) caught my eye. Skimming the article in my dazed jetlagged state was not ideal for a critical reading but I snapped a photo with my phone of the final paragraph:

“In 2014 the government asked teachers to tell them what created unnecessary work. Three big areas were marking, planning and data management.”

I recognise that the data deluge in schooling is now overwhelming, may be driven by externally imposed system imperatives, and is not always put to use to improve student learning. However, having spent my professional life as a secondary English teacher, tertiary teacher educator and researcher, I could not see how “marking” and “planning” could be regarded as “unnecessary work” for teachers.

Planning is surely at the heart of teachers’ work. Otherwise how do we claim our status as professionals? Ideally we don’t just wing it in the classroom, nor do we follow prescriptive scripts. Systematic, responsive, syllabus-informed planning of purposeful sequences of learning and meaningful resources is what makes the difference for individuals and groups of students. Well-selected and fine-grained data about student progress (not necessarily only the numerical data favoured by educational systems) should of course inform such planning, as skilled teachers identify gaps and opportunities for extension and tailor their planning to their students’ needs and potential.

Having high expectations and creating the conditions – through careful and ideally collaborative planning – for students to succeed and to excel are hallmarks of quality teachers. These features are characteristic of exemplary teaching in disadvantaged contexts (Lampert & Burnett, 2015; Munns, Sawyer & Cole, 2013). Careful planning need not preclude flexibility, creativity and authenticity in learning and assessment practices, but conversely may enable these qualities (Hayes, Mills & Christie, 2005; Reid, 2013). As many of these authors stress, good planning is often underpinned by a disposition of teachers to become researchers of learning within their own classrooms. Where teachers are provided some agency and capacity to gather and use data then problems are less likely to be at the low level of time consuming and potentially meaningless “data management” that is perceived as “unnecessary work” by teachers in England.

Marking is of course close to my heart as a secondary English teacher, and I have spent countless hours of my life providing written feedback on student work. Whilst I have become adept at designing and using outcomes-based rubrics and criteria sheets since their introduction in the mid-90s with outcomes-based assessment and curriculum, I have always endeavoured to provide tailored and specific feedback to students on their texts.

This for me is “marking” as a process, and I think of it – in ideal circumstances – as sometimes like a sort of dialogue on the page between student, text and teacher, and an opening towards further dialogue. It features in formative as well as summative assessment contexts (apart from exams). Now it features in the texts in progress that are thesis chapters for my current doctoral students. In a perfect world it is diagnostic, supportive, explicit and critical in combination and students will take heed. Portfolios, peer and self-assessment processes and tools can be incorporated. As Munns et al (2013) describe, sharing assessment responsibility is an important component of the insider school. The volume and pressure of marking has always been problematic however, when short timelines for results and sheer numbers of students across multiple classes work against ideal scenarios. My research into creative writing in secondary schools (e.g. Gannon, 2014) suggests how English faculties were able to work collegially to support senior students as they developed major works in English. Marking, at best, can be rewarding, encouraging and useful for students and for teachers.

Where, then, does the aversion to marking come from for teachers in England? The article in The Times does not provide any pointers towards the government survey of 2014, but is rather an announcement of a large randomised control trial to be funded by the UK-based Education Endowment Foundation, based on a report reviewing written feedback on student work that the Foundation commissioned and recently published (Elliot et al., 2016). The opening of the executive summary of the report provides further detail:

[T]he 2014 Workload Challenge [UK] survey identified the frequency and extent of marking requirements as a key driver of large teaching workloads. The reform of marking policies was the highest workload-related priority for 53% of respondents. More recently, the 2016 report of the Independent Teacher Workload Review Group [UK] noted that written marking had become unnecessarily burdensome for teachers and recommended that all marking should be driven by professional judgement and ‘be meaningful, manageable and motivating’. (2016, 4)

Well, of course! What has gone wrong in England that marking is not driven by these qualities? Are there lessons for us in Australia (yet again from England) of what not to do in educational reform? Although the report acknowledges that there is very little evidence or research into written marking, it nevertheless identifies some inefficient and apparently widespread practices: triple-marking, awarding grades for every piece of student work (so that the grades distract students from the feedback), requiring too many texts from students and marking excessive numbers of them, providing low-level corrections rather than requiring students to take some responsibility for corrections and improvements, and moving on without giving students time to process and respond to feedback.

Despite the caveat in the opening section, the report is worth reading in full (though it has been criticised locally, e.g. Didau, 2016). Secondary teachers are much more inclined to put a grade on every piece of student work, it notes (2016, 9). Unsurprisingly, offering clear advice on how a student may improve their work in a particular dimension seems to be more useful than broad comments (‘Good work!’) or excessively detailed and overwhelming commentary (2016, 13). Targets or personalised and specific “success criteria” may be effective, particularly where students are involved in establishing them (2016, 20; also see Munns et al., 2013).

It is in this part of the Report that the overall logic of the newspaper article becomes apparent. Buried well down into the subsection on “Targets” is the following comment:

Writing targets that are well-matched to each student’s needs could certainly make marking more time-consuming. One strategy that may reduce the time taken to use targets would be to use codes or printed targets on labels. Research suggests that there is no difference between the effectiveness of coded or uncoded feedback, providing that pupils understand what the codes mean. However the use of generic targets may make it harder to provide precise feedback. (2016, 20).

The Times headline is therefore not quite accurate. It seems that grades will not be replaced by codes; rather, teachers’ written comments will be. In another article, “Schools wanted to take part in marking without grading trial” (Ward, 2017), this is called “FLASH Marking”, an initiative developed in-house by a secondary school in northwestern England that will be rolled out to 12,500 pupils in 100 schools (EEF, 2017). The school claims that teachers will now be able to mark a class of Year 11 exam papers in an hour. Students will receive an arrow (at, above or below expected target) and codes such as CR = “creative original ideas” and V = “ambitious vocabulary needed”.

It seems from these news stories (and presumably EEF will eventually publish the design protocols on their website) that two different factors are being tested: one is withholding grades, and the other is using codes instead of written comments. I’m curious but ambivalent; after all, at university it is now mandatory to use “Grademark” software for coursework students. This enables teachers to provide generic abbreviated feedback (“codes”) but also gives us the opportunity to personalise responses and supplement them with an extended written comment, or even an audio-recorded comment. These are highly personalised and appreciated by students.

To turn back to the English example, I wonder whether the randomised control trial design (in this case an efficacy trial to be evaluated by Durham University) means that participating schools will not be able to improvise around the conditions of the feedback. At least, if the reduction of feedback to codes proves not to improve student results, the need for a control (or “business as usual”) group means the damage will be limited to only half the participating schools and students. The news articles are unclear about the purpose of the study, which is described as a way to reduce teacher workload more than to improve student learning. However, the EEF project description also mentions, reassuringly, that the rationale is focused on student outcomes, as “specific, actionable, skills-based feedback is more useful to students than grades” (2017). The project will follow Year 10 students in senior English classes through to the end of secondary school, with a report to be published in 2021. Already, I can’t wait.

References

Bennett, R. (June 17, 2017). CR (Creative original idea): Grades on schoolwork replaced with codes. The Times.

Didau, D. (May 18, 2016). The Learning Spy Blog. http://www.learningspy.co.uk/assessment/marked-decline-eefs-review-evidence-written-marking/

Education Endowment Foundation (2017). Flash Marking. https://educationendowmentfoundation.org.uk/our-work/projects/flash-marking/

Elliott, V., Baird, J., Hopfenbeck, T., Ingram, J., Thompson, I., Usher, N., Zantout, M., Richardson, J., & Coleman, R. (2016). A marked improvement? A review of the evidence on written marking. Education Endowment Foundation. https://educationendowmentfoundation.org.uk/resources/-on-marking/

Gannon, S. (2014). ‘Something mysterious that we don’t understand…the beat of the human heart, the rhythm of language’: Creative writing and imaginative response in English. In B. Doecke, G. Parr & W. Sawyer (Eds), Language and creativity in contemporary English classrooms (pp. 131-140). Putney: Phoenix Education.

Hayes, D., Mills, M., & Christie, P. (2005). Teachers & schooling making a difference: productive pedagogies, assessment and performance. Allen and Unwin.

Lampert, J. & Burnett, B. (Eds) (2015) Teacher Education for High Poverty Schools. Springer.

Munns, G., Sawyer, W. & Cole, B. (Eds). (2013). Exemplary teachers of students in poverty. Routledge.

Reid, J. (2013). Why programming matters: Aporia and teacher learning in classroom practice. English in Australia, 48(3), 40-45.

Ward, H. (June 16, 2017). Schools wanted to take part in marking without grading trial. Times Educational Supplement. https://www.tes.com/news/school-news/breaking-news/schools-wanted-take-part-marking-without-grading-trial
Dr Susanne Gannon is an Associate Professor in the School of Education and a senior researcher in the Centre for Educational Research at Western Sydney University, Australia.