Expert comment

The bad ‘science’ underpinning baseline assessment in primary schools

Dr Kate Smith explains why the government’s new baseline assessment for children is unlikely to show us much about children’s progress as learners.

This term the government’s Reception baseline assessment (RBA) is being piloted in a number of reception classrooms in England, with a national roll-out due in September 2020. However, objections from parents and professionals have been numerous, and a BERA report has described it as ‘flawed, unjustified and wholly unfit for purpose’.

Yet despite this, the headteachers’ union (NAHT), while recognising the challenges of developing a reliable and workable assessment, is embracing RBA, arguing that it is a better indicator of school effectiveness than attainment figures alone. This makes sense if the test accurately measures progress from the ‘starting point’ to the ‘end point’. But can it?

If this generalised test is to be justified by its ability to create comparable data sets and to predict children’s progress, then on what scientific basis is it founded? Is it really accurate and scientifically sound?

We know from centuries of research and practice with young children that they learn holistically: their social, emotional, physical and cognitive experiences come together to create unique knowledge of the world. This is an approach recognised by the OECD as a predictor of quality in early childhood education. RBA undermines this by focussing solely on an idea of cognition based on the child knowing measurable ‘facts’, treating her cognition as separable from her social, emotional and physical self. And, surprisingly for a test that foregrounds cognition, it ignores current psychological research showing that educational success depends less on ‘facts’ than on mindset and metacognition – the child’s sense of themselves as a learner and of how they can improve.

By disregarding this underpinning body of scientific knowledge about young children, RBA is immediately suspect, and therefore so is the data it produces. However, there is a tried and tested scientific alternative that can help us create more rigorous data: observation. When scientists observe, they aren’t just looking for what they already know; their gaze isn’t narrowed to simple checklists. They aim to see changes and differences over time and in different contexts. Importantly, they are aware of prior research that can inform what they are looking at and help them ‘see’ better.

Maria Montessori, as a scientist, understood the importance of investigating, describing and identifying the ‘natural phenomena’ of learning through observation. More recently, the Reggio Emilia approach, founded on a ‘pedagogy of listening’, has employed creativity to ‘see’ children’s capabilities and potential. Like scientists, these practitioners are curious and questioning about what they are examining (in this case children) and what they can tell us about their world.

Importantly, there is also an ethical dimension to be considered in any scientific endeavour: the effect it has on people and society. This is being brushed aside in the implementation of RBA. Parents, practitioners and researchers have raised ethical concerns about the child as ‘subject’ within these tests, the way in which they are ‘used’ for government purposes, and the potential for future harm.

If assessment tests are deemed to be scientific, they should create new knowledge in an ethical way. RBA is very flimsy in this respect: it is a tool that produces data to ‘fit’ with other flawed data, an example of bad science that is unlikely to show us much about children’s progress as learners.

Dr Kate Smith is Senior Lecturer in Childhood and Early Childhood Studies, in the Faculty of Education.
