Harefield


Friday 15 November 2013

Rhubarb, rhubarb, rhubarb

It was one of those supermarket moments. The young man at the checkout looked at the long red stalks topped with exuberant green foliage and asked my wife what it was. My heart went out to him. To have reached your teenage years without knowing the sensuous delights of rhubarb crumble seemed almost unbearable; rhubarb crumble is, to my mind, far superior to the traditional teenage delights of sex ‘n’ drugs ‘n’ rock ‘n’ roll, but then I am well over sixty and he is, perhaps, sixteen. And perhaps he just did not recognise rhubarb in its raw state and, in any case, I digress.
My wife telling him the name of the product did not seem to help him to locate it on his checkout touch screen. We waited. In desperation, I hissed “Tell him it’s r-H-u …”. Problem (finally) solved.
As if I needed any reminding, this served to illustrate yet again the importance of factors other than decoding (phonically or otherwise) in reading and, indeed, spelling. To be literate, to be able to read and write (and spell) with reasonable proficiency, it is not enough to be able to decode or encode words and text. You actually have to know what you are reading or writing about. You need knowledge.
This sounds painfully obvious on the surface: how could it be otherwise? But a sole preoccupation with decoding and encoding, especially when teaching low-progress readers, can sometimes distract one from the equally important task of teaching … well, stuff. It is all very well to be able to read aloud “The rodent consumed the gorgonzola” but, effectively, barking at print is all that will be achieved unless you know what a rodent is, what consume means and that gorgonzola is a type of cheese.
The problem is, of course, that learning quite a bit of this ‘stuff’ actually comes about via reading. Consequently, if you are poor at reading, you will tend to read far less and to learn less general knowledge. Moreover, many of the words and terms that we learn do not crop up frequently in everyday conversation; we only encounter them in books. Hence, low-progress readers experience what Keith Stanovich calls ‘the Matthew Effect’ whereby the rich get richer and the poor get poorer; those who can read well, read more and learn more; those who read poorly, read less and hence learn less.

All of this leads to the rather obvious point that we need to teach ‘stuff’ to low-progress readers as well as how to decode effectively and efficiently. In recent years, there has been a noticeable and welcome shift among those interested in teaching reading effectively towards focusing on all five of the ‘big ideas’ of reading instruction; that is, giving equal prominence to vocabulary and comprehension as to phonemic awareness, phonics and fluency. We also need to focus on the acquisition of general knowledge about the world and to find ways of bringing low-progress readers up to speed with what typically developing readers are already likely to know. To contemplate a world without rhubarb in it is just too sad.

Vital Signs in Reading

Anyone who has spent time in hospital or has a long-term illness will be well aware of the importance doctors and nurses attach to the continual monitoring of ‘vital signs’: body temperature, heart rate (or pulse), and blood pressure (BP). Measurement of these vital signs can be achieved very quickly, easily and frequently. What is perhaps not so commonly known is that these vital signs can be highly variable and subject to considerable fluctuation as a result of varying circumstances.
Blood pressure measurement, for example, can fluctuate from one reading to the next and is particularly susceptible to changes in when and where it is taken and by whom. Sometimes simply being examined by a medical professional can make our blood pressure go up: the ‘white coat phenomenon’.
But does this variability in BP measurement mean that it is useless for diagnostic or monitoring purposes? The answer is no, of course not; measures do not need to be perfectly reliable to be very useful, in detecting hypertension, for example. We can also iron out some of the blips by taking several measures and averaging them, or by taking repeated regular readings and looking at BP levels over time. Hypertension, or high blood pressure, is, of course, not an all-or-nothing affair, since blood pressure varies across individuals and lies on a continuum. The BP levels we refer to as indicating degrees of hypertension are not magic markers but are, in a sense, arbitrary cut-offs that have proved in practice over time to be useful indicators for detecting potential problems.
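(For the statistically minded, the ‘ironing out’ point can be made concrete with a trivial sketch; the readings below are invented for illustration and are not clinical data.)

# A minimal sketch of smoothing noisy repeated measurements by averaging them:
# the mean of several readings is a steadier estimate than any single reading.
# All figures are invented.

def average(readings):
    return sum(readings) / len(readings)

single_reading = 148                       # one-off reading, possibly a 'white coat' blip
repeated_readings = [148, 133, 136, 135]   # several readings taken over a sitting or a week
print(average(repeated_readings))          # 138.0 -- closer to the underlying level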
By the same token, there are ‘vital signs’ like BP that are very useful to us when teaching reading. We can measure reading performance reliably enough for it to be very useful to us in practice, helping us to determine which of our students need additional help, for example.
There is another parallel here with hypertension. Some people still seem to believe that dyslexia or reading disability is a clearly differentiated specific condition that is either present or it is not; all or nothing. But reading performance, like BP, is on a continuum and where we set the performance bar to indicate a reading disability is essentially arbitrary. Children vary in the extent to which they display difficulties in reading. By changing the performance criterion, we can define reading disability as referring to 5, 10 or 20 per cent of the population, for example. The decision where to place the bar is a judgement call and is likely to be influenced not only by student need but also by the resources available. To take an extreme example, there is little point identifying 50 per cent of students as being dyslexic if we have resources available to meet the needs of only 5 per cent.
The important thing to bear in mind, then, is that reading difficulties may be present to a greater or lesser extent. Many reading researchers and specialists today would argue that defining dyslexia is a largely futile exercise and that we should concentrate instead on helping all struggling readers to perform at a level that can reasonably be considered to be within an acceptable range for their age. To help us in this endeavour, we need good measures of reading performance that are reasonably reliable (like BP, they will not be perfect), that are quick and easy to administer, and that we can use to screen for reading problems and to monitor, frequently and on a regular basis, the reading progress of those whose performance concerns us.
Unfortunately, many of the reading tests out there are time-consuming to administer and may only be used reliably at infrequent intervals. Such tests are not very useful to us in monitoring the reading performance of our students.
In recent years, reading researchers have been experimenting with so-called curriculum-based measures of reading that have been shown to be both remarkably reliable and valid measures of reading performance while being both quick and easy to administer. This new approach to reading assessment also allows teachers and others to test students frequently to monitor progress, by providing numerous different reading passages that have been shown to be of an equal difficulty level. (One such reading assessment instrument, the Wheldall Assessment of Reading Passages (or WARP), has recently been released by MultiLit Pty Ltd: www.multilit.com .) When such an effective reading assessment tool is available to them, teachers and others can use the data collected to make instructional decisions so as to tailor their teaching strategies to meet individual student needs.
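As a rough illustration of the logic of curriculum-based measurement (not a description of how the WARP itself is scored), such measures typically reduce each brief, timed probe to a simple fluency score, such as words read correctly per minute, and track the trend across regular probes on passages of equivalent difficulty. A minimal sketch, with invented figures:

# Generic sketch of curriculum-based measurement of oral reading fluency:
# each probe is a brief timed reading scored as words read correctly per minute (WCPM).
# Figures are invented; this is not WARP data or the WARP scoring procedure.

def wcpm(words_read, errors, seconds):
    """Words read correctly per minute for one timed probe."""
    return (words_read - errors) * 60 / seconds

# Regular one-minute probes on passages of equivalent difficulty
probes = [(62, 7, 60), (68, 6, 60), (75, 5, 60)]
scores = [wcpm(w, e, s) for w, e, s in probes]
print(scores)  # [55.0, 62.0, 70.0] -- a rising trend suggests the student is making progress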
Like hospital patients, low-progress readers must be monitored on a regular basis to ensure that the interventions being employed are working and that they are making real improvements. Educators need to be like doctors to their students, monitoring their vital signs in reading and ensuring that no student is left behind.



[Acknowledgement: I would like to acknowledge the editorial assistance of my daughter, Rachael Wheldall, with this article.]

Monday 4 February 2013

Small Bangs for Big Bucks: The long-term efficacy of Reading Recovery


‘Best Evidence in Brief’ is a fortnightly email newsletter “brought to you by the Johns Hopkins School of Education's Center for Research and Reform in Education and the University of York's Institute for Effective Education”, both led by Bob Slavin. In the latest issue (January 30, 2013 http://tinyurl.com/a9bn66a), there is an interesting item entitled 'Lasting effects from Reading Recovery' citing a recent report (dated December, 2012) by Jane Hurry from the Institute of Education, University of London (the British home of Reading Recovery). Hurry’s report is entitled ‘The impact of Reading Recovery five years after intervention’ (http://tinyurl.com/avhubv9).
This is how ‘Best Evidence in Brief’ reports the study (with a link to the report):
“A recent report for the Every Child a Reader Trust looks at the impact of Reading Recovery five years after intervention. The program is known to have impressive effects in the short term, but less is known about its long-term effectiveness. This study suggests that the substantial gains which result from receiving Reading Recovery in Year 1 (the UK equivalent of kindergarten) continue to the end of primary school.
At the end of Year 6 (the UK equivalent of 5th grade), the study followed up 77 children who had received Reading Recovery five years earlier, 127 comparison children, and 50 children in Reading Recovery schools who had not received Reading Recovery. Findings showed that children who had received Reading Recovery made significantly greater progress in English than the comparison children by the end of Year 6. The 50 comparison children in the Reading Recovery schools were also significantly out-performing the comparison group in non-Reading Recovery schools on the reading test.”
It would be reasonable to conclude from this summary (published by two research centres apparently devoted to championing evidence-based practice) that Reading Recovery had already been shown, unequivocally, to be effective in the short term and that now there was convincing evidence for its efficacy in the longer term too, at the end of Year 6, in fact.
Many reading scientists are less convinced of the efficacy of Reading Recovery even in the short term; see, for example, Reynolds and Wheldall, 2007: http://tinyurl.com/bggyl46
But let us concentrate, for present purposes, on the longer term effects reported by Hurry and summarised by ‘Best Evidence in Brief’. This is how Hurry herself summarises her findings in her report:
“Reading Recovery is part of the Every Child a Reader strategy to enable children to make a good start in reading. Reading Recovery is well known to have impressive effects in the shorter term but less is known about its long term effectiveness. The present study followed up at the end of Year 6: 127 comparison children, 77 children who had received Reading Recovery five years earlier and 50 children in Reading Recovery schools who had not receive Reading Recovery. The children who had received Reading Recovery had made significantly greater progress in English than the comparison children by the end of Year 6, achieving on average a National Curriculum Level of 4b compared with a borderline between Level 3 and 4 in the comparison group. Comparison children in the Reading Recovery schools were also significantly outperforming the comparison in non Reading Recovery schools on the reading test. 78% of Reading Recovery children achieved Level 4 in English compared with 62% in the comparison group in non Reading Recovery schools and 64% for the comparison children in Reading Recovery schools. There was a tendency for Reading Recovery children to be receiving less SEN provision than children in the other two groups, but this only reached statistical significance for those on School Action Plus or a Statement. This suggests that the substantial gains which result from receiving Reading Recovery in Year 1 continue to deliver a significant advantage for those children at the end of the primary phase, providing a surer footing for transition to secondary school.” (P. 3)
If you read the details in the actual report, however, the facts for the end of Year 6 results are reported by the author (Hurry) as follows:
“The Reading Recovery children were still doing significantly better in reading (β=.191,  p<.005), effect size (Cohen’s d) = .39) and writing (β=.162, p<.013, effect size (Cohen’s d) = .33) than the comparison children in non Reading Recovery schools. They were also scoring significantly higher on their maths test (β=.154, p<.036, effect size (Cohen’s d) = .31). However, they were not significantly better than the comparison children from the Reading Recovery schools on any of the measures (reading, writing or maths). Indeed the comparison children from Reading Recovery schools, ie. those that were poor readers at six but did not receive the programme, were also doing significantly better in reading than the comparison children from non RR schools (β=.222, p<.002, effect size (Cohen’s d) = .24).” (p.12) (present author’s emphasis)
So, let’s be clear about this. The children who received Reading Recovery did not perform significantly better than the comparison students from the same schools who had not received Reading Recovery. (Note: no details of statistical significance testing or effect sizes are reported for these comparisons.)
But the Conclusions section of the report states (and this is the only conclusion offered):
“These findings indicate that effects of Reading Recovery are still apparent at the end of Year 6 and that even the children who attended Reading Recovery schools but were not offered the programme benefited somewhat from the ECaR programme.” (P. 22)
It is a source of some consternation to reflect on the fact that neither the report’s author (Hurry) nor the writers of ‘Best Evidence in Brief’ appear to have considered the (to me) obvious alternative conclusion: that there is no need to actually take part in Reading Recovery; merely attending a Reading Recovery school appears to be sufficient!
Another interpretation of these data is that they provide no evidence for the long term efficacy of Reading Recovery, because those children in the schools who did receive Reading Recovery performed no better than those who did not. And both of these groups performed better than the comparison children in the non-Reading Recovery schools. In other words, there is no discernible effect for the program per se, only for differences between schools. Moreover, even the significant differences between the two groups in the Reading Recovery schools and the comparison group in the non-Reading Recovery schools are accompanied by only small effect sizes, all of which are below Hattie’s hinge value of 0.4.
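For readers unfamiliar with the statistics quoted above, Cohen’s d simply expresses the difference between two group means in pooled standard deviation units, which is what makes Hattie’s 0.4 ‘hinge’ usable as a rough yardstick. A minimal sketch with invented scores (not data from the Hurry report):

# How an effect size (Cohen's d) is calculated and compared with Hattie's 0.4 hinge.
# The two groups of scores below are invented for illustration only.
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Difference between group means, in pooled standard deviation units."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * stdev(group_a) ** 2 + (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

tutored_group    = [24, 29, 27, 31, 25, 26]   # invented reading scores
comparison_group = [23, 28, 26, 30, 24, 25]

d = cohens_d(tutored_group, comparison_group)
print(round(d, 2), "below the 0.4 hinge" if d < 0.4 else "at or above the 0.4 hinge")  # 0.38 below the 0.4 hinge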
Considering the huge expense involved in one-to-one Reading Recovery tutoring, these are very small bangs for very big bucks.

Friday 18 January 2013

PIRLS before Swine: Or why Australia sucks at reading


John Lennon was renowned for his sharp, and oft-times acidic, wit. When asked if Ringo was the best drummer in the world, he responded that Ringo was not even the best drummer in the Beatles! I was reminded of this when reviewing the latest (2011) results from the Progress in International Reading Literacy Study (or PIRLS). Not only were Australian students not the best readers in the world, they were not even the best readers among the English-speaking nations surveyed. They were, in fact, the worst. We can take small comfort from the fact that New Zealand performed only marginally (and not significantly) better than Australia.
The PIRLS project essentially assesses reading comprehension by requiring students to read selected texts and then to answer questions about the material read. Year 4 students are assessed because this typically marks the point of transition from learning to read to reading to learn. (Note that 2011 was the first time that Australia had taken part in PIRLS.)
Overall, 45 countries were included in the study (excluding those countries that tested older or younger readers or that took part for their own internal benchmarking purposes, and whose results are not reported). Australia came 27th in the league table of countries (with a mean score of 527), below all other English-speaking countries and significantly lower than 21 other countries overall, including all other English-speaking countries except New Zealand (mean score 531).
To put this in perspective, let’s look at how some of these other English-speaking countries performed. Singapore (mean score 567), for example, came 4th, one of the four top-performing countries significantly above the others. Northern Ireland came 5th (mean score 558) and the United States came 6th (mean score 556) (compared with 14th out of 40 in 2006). England came 11th (compared with 15th out of 40 in 2006) and Canada (mean score 548) came 12th. (Note the improvements in performance from 2006 to 2011 by both England and the United States.)
As well as reporting mean scores by country, PIRLS also provides details of performance against four benchmarks: Advanced, High, Intermediate, and Low (and those who fail even to qualify for Low, i.e. Below Low). In Australia, 7% of students failed even to meet the Low benchmark and a further 17% met only the Low benchmark. Here are the comparison percentage figures for other English-speaking countries of interest:
                     Below Low    Low    Combined
Northern Ireland         3         10        13
Singapore                3         10        13
Canada                   2         12        14
United States            2         12        14
England                  5         12        17
New Zealand              8         17        25
(Australia)              7         17        24
In summary, Australia and New Zealand have over twice as many students failing to meet even the minimal Low standard as Northern Ireland, Singapore, Canada and the United States, and roughly one and three-quarter times as many low-performing students overall (Below Low and Low combined). England falls between these two groups of countries.
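(These ratios can be checked directly from the benchmark percentages in the table; the quick sketch below simply does that arithmetic.)

# Back-of-the-envelope check of the ratios cited above, using the
# Below Low and combined (Below Low + Low) percentages from the PIRLS table.
below_low = {"Northern Ireland": 3, "Singapore": 3, "Canada": 2, "United States": 2, "Australia": 7}
combined  = {"Northern Ireland": 13, "Singapore": 13, "Canada": 14, "United States": 14, "Australia": 24}

for country in ("Northern Ireland", "Singapore", "Canada", "United States"):
    print(country,
          round(below_low["Australia"] / below_low[country], 2),   # 2.33 or 3.5
          round(combined["Australia"] / combined[country], 2))     # 1.71 to 1.85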
These results may have come as a shock to many educationists who had been blithely arguing that there was no literacy crisis in Australia. But they provided simple confirmation for Australian reading scientists who had been warning of this problem for some time and had argued (remarkably accurately, as it turns out) that a quarter of Australia’s students could be regarded as low-progress readers. In 2004, a group of Australian reading scientists and clinicians wrote an open letter to the then Federal Minister of Education, Brendan Nelson, arguing the need for reform regarding the way reading is taught in Australia and the need for literacy teaching to be based on the available scientific evidence. The subsequent National Inquiry into the Teaching of Reading reported at the end of 2005, essentially reiterating these concerns and stating clearly what needed to be done to improve reading standards in Australia. The Report was then, to all intents and purposes, simply ignored.
Moreover, following the implementation of the National Assessment Program – Literacy and Numeracy (or NAPLAN) from 2008, we were subsequently assured (annually) that all was well on the reading front. As recently as 2012, we were reassured that, for Year 3 students in Reading, only 4.4% were in Band 1, having failed to meet the National Minimum Standard, and only a further 9.4% were in Band 2, i.e. at the National Minimum Standard (a combined total of 13.8% of students). (Note that the NAPLAN measure of reading is similar to the reading comprehension measure employed by PIRLS.)
Clearly, we have been deluding ourselves by measuring student reading performance against unrealistically low benchmarks that do not withstand international scrutiny. NAPLAN, as a reading performance measure, needs to be recalibrated against international standards so that Bands 1 and 2 combined ‘capture’ the bottom-performing 25% (not the current 14%) of students. (These low-progress readers should then be earmarked for immediate additional instructional support.) Moreover, it should not be beyond the wit of the NAPLAN methodologists to tie the NAPLAN scale to the metric employed by PIRLS so that progress towards achieving international standards could readily be monitored.
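(To make the recalibration suggestion concrete: one hypothetical way to set the cut-off would be to place the top of Band 2 at the 25th percentile of the national score distribution, as in the toy sketch below; the scores are invented and this is not how NAPLAN scaling actually works.)

# Toy illustration of a percentile-based cut-off: choose the score below which
# the lowest-performing 25% of students fall. Scores are invented.
import random

random.seed(1)
scores = sorted(random.gauss(500, 70) for _ in range(10000))   # invented reading scale scores

cutoff = scores[int(0.25 * len(scores))]                        # score at the 25th percentile
proportion_flagged = sum(s <= cutoff for s in scores) / len(scores)
print(round(cutoff), round(proportion_flagged, 3))              # roughly 25% of students flagged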
Finally, it is interesting to note that the two English-speaking countries that have begun to take reading instruction seriously in recent years, and that have urged that reading instruction be based on the best available scientific evidence, namely the US and England, have both improved their international standing substantially in the PIRLS league table from 2006 to 2011. Similarly, the two English-speaking countries that performed so poorly, namely Australia and New Zealand, are those that have clung most tenaciously to the discredited ‘philosophy’ of whole language literacy instruction. Can this be simple coincidence? I think not.

Reference note:
A summary of Australia’s performance in PIRLS 2011 is provided in:
Thomson, S., Hillman, K., Wernert, N., Schmid, M., Buckley, S., & Munene, A. (2012). Highlights from TIMSS & PIRLS 2011 from Australia's perspective. Melbourne: Australian Council for Educational Research.