06 Sep Research Literacy: Literacy Research
This post contains the key ideas and materials from my presentation at ResearchEd 2015. It seems to me that a high proportion of discussion about research in education doesn’t actually make reference to specific research evidence or trials. I’ve decided that every time I give a talk at ResearchEd, I will look at the details behind some studies to see what they tell us. This is partly to learn about the subject matter but also to explore the business of conducting educational research and our capacity to engage with it and trust the findings.
Last year, I whizzed through several studies of different kinds – as featured in this Do Your Homework post. This year I focused on literacy catch-up schemes as this is an area that we’re investigating at Highbury Grove.
The slides are here – although, as ever, I don’t know how coherent they are without the commentary:
My initial Google search led me directly to this DfE document from 2012:
There is a table in the DfE publication that ranks interventions by their effect size. It's significant that most of the top-scoring studies feature 1:1 schemes – i.e. where students are taught individually. That's a recurring finding throughout my explorations. Way out in front is a study called Literacy Acceleration in Cornwall, with an effect size of 1.14. I wanted to find out more, so I followed the citation: Brooks (2007).
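For anyone unfamiliar with the measure used in that table: an effect size is typically a standardised mean difference such as Cohen's d – the gap between treatment and control group means, divided by the pooled standard deviation. A minimal sketch (the function name and the pooled-SD formulation are my own illustration, not taken from the DfE document):

```python
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Standardised mean difference: the gap between group means
    divided by the pooled standard deviation of the two groups."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled_sd

# Two small groups whose means differ by one pooled standard deviation
# give an effect size of 1.0 – roughly the scale of the 1.14 reported
# for Literacy Acceleration.
print(cohens_d([5, 6, 7], [4, 5, 6]))
```

An effect size of around 1.14 is very large by the standards of education trials, which is exactly why a ranking that ignores sample size and study quality can mislead.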
This turns out to be the work of a researcher called Greg Brooks, who is basically the leading expert in the field, responsible for major reviews of all literacy studies in the UK. Here is the link to the pdf.
In this publication Greg gives comprehensive details of all the studies, their methodologies and their effect sizes. He also uses a scale called Ratio Gain or RG, which is a nicely intuitive measure. Basically, if students make 12 months' progress on average on reading age tests taken 6 months apart, the RG has a value of 2. Significantly, Greg's reports are highly objective and he does not engage in any ranking; the studies are listed in alphabetical order.

The page for Literacy Acceleration is included in the slides. Sure enough, the effect size is 1.14. However, looking at the detail, it seems that the study was conducted by a PhD student (Lingard) in 1994 with just 26 students in one school. In fact, the same student conducted other trials of the same process that scored 0.37 and 0.23; these are also included in the DfE list that I started with.

So… it turns out that the top-ranked literacy intervention in the DfE list is a very small study, over 20 years old, at a scale that any teacher could manage in their own classroom; the sort of scale that is often dismissed as action research that doesn't really count! The DfE official who wrote the publication had simply lifted the effect sizes and produced a ranked table without even trying to explore the substance behind the trials. Teachers beware! We can't simply trust DfE publications… they're too lazy to do the work needed to inform our decisions.
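The Ratio Gain measure described above is simple enough to compute directly – it is just reading-age gain divided by the chronological time over which it was achieved, so RG = 1 means a student is progressing at the normal rate. A minimal sketch (the function name is my own):

```python
def ratio_gain(gain_in_months, elapsed_months):
    """Ratio Gain (RG): reading-age gain in months divided by the
    chronological months elapsed. RG = 1 means normal-rate progress;
    anything above 1 means the student is catching up."""
    return gain_in_months / elapsed_months

# Brooks' example: 12 months' progress on tests taken 6 months apart
print(ratio_gain(12, 6))  # -> 2.0
```

On this scale, the RG of 5 reported later in this post for Thinking Reading would mean students gaining five months of reading age for every month of teaching.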
I then noted that Greg Brooks had updated the 2007 survey in 2013. Here is the link and, if you’re interested in educational research, I recommend reading this. It’s a superb piece of work.
In this update, surprise surprise, Literacy Acceleration has been removed. It didn't meet Greg's criteria (shown in my slides) on many fronts, including the fact that the programme has ceased to exist and that the sample was below 30 students. The slides show some of the details of the secondary intervention studies. It is striking to me how few studies have ever been done on a large scale. Most studies involve fewer than 100 students; some span only a few months; it's rare for a study to run for more than a year. That shows how little emphasis we place on these studies in general; some quite famous and widely used schemes don't seem to have been subjected to high-level studies at all. Reading through all the studies, it seems that the best evidence comes from 1:1 approaches where students are taught to read directly – expensive perhaps, but effective. Greg Brooks makes some helpful overall conclusions which are in the slides and reproduced here:
The first one is obvious but actually a very powerful statement: ordinary teaching (no treatment) does not help children with literacy difficulties to catch up! In other words – we have to do something!
I was interested to note that Accelerated Reader, despite being one of the most widely used schemes, does not feature in the 2013 publication. It was dropped from the 2007 version because the studies didn't meet the criteria: most were simply too small or showed results that weren't strong enough to be included! We have used Accelerated Reader for a few years, in common with hundreds of UK schools, and our staff have had various concerns about it: there are too many variables to control, too many students don't make much progress, the IT needs limit how much access any one student gets and, significantly, it works best where we have TAs who can effectively give 1:1 support! I was delighted to learn that the EEF undertook a study, published only this February:
Again, it is well worth reading simply as an example of educational research. It involved four schools and around 380 students in a formal RCT supervised by EEF researchers. The report explores the effectiveness of Accelerated Reader but also makes numerous references to the process of conducting the trial itself, illustrating just how complex this can be. The key findings are included in the slides – both in data terms and the researchers' overall comments. In essence, the trial did show a positive effect, but only in what I would consider optimal conditions. Many of the concerns we have are reflected in the comments in the report. This study does not shift my view that Accelerated Reader isn't sufficiently targeted or effective – mainly because it is designed to incentivise reading, not to teach reading: a crucial difference. We've decided to phase it out.
My presentation concluded by looking at a programme we've decided to explore from this year onwards: Thinking Reading. Although I didn't mention this in my presentation, I think the original recommendation came from David Didau, who ran a literacy session with our Learning Support team last term. He wasn't giving it a hard sell – he simply suggested it might be worth a look. The scheme involves TAs being trained to deliver a specific 1:1 programme that teaches reading and spelling systematically, gathering assessment information every session: progress is tracked very closely. Thinking Reading is included in Greg Brooks' 2013 review. He suggests that their RG of 5, delivered over an unusually long period of 14 months with 44 students, represents an impressive outcome – albeit based on data provided by the programme itself (in common with many other studies). Some individual results are very significant – see the slides. It is the fact that it can be tested for impact very directly that has sold it to me. I'll know if it works. Also, I'm convinced that investing in intensive support (3 x 30 mins per week) for the most needy students has far greater hope of delivering value than spreading a programme like Accelerated Reader more thinly across more students; too many slip through.
The presentation ended in a fairly bizarre manner as it turned out that Dianne and James Murphy who run Thinking Reading were both in the session! They didn’t know I’d be talking about their scheme… It was great to talk afterwards and discuss things in more detail.
I ended the session by encouraging more people to accept invitations to get involved with formal trials. We’ll be contributing our Thinking Reading data to the overall evaluation of the scheme. We could have been doing the same with Accelerated Reader. Really, for too long, we’ve been guessing our way through and we need to be much more systematic in the way we evaluate initiatives like this – both at school level and nationally.
UPDATE April 2016: With thanks to Dianne Murphy (@ThinkReadTweet from Thinking Reading), here is the very latest report from Prof Brooks: