22 March 2013

Data Delusion Solutions Part 1

Filed in Leadership Issues

Following my last Data Delusion post, I’ve had an interesting response in three forms:

1) Joyful but misguided approval: data is all rubbish, we don’t need it… we’re off the hook… can’t wait to tell my Head/HoD…  In fact I’m arguing for a more sophisticated, contextualised understanding of educational measurement and its limits, which is different. Better data; meaningful data; data in its right place… not no data.

2) Tsk, tsk, data is important; you’ve missed the point: it can be really useful in raising standards; some schools or teachers hide behind data-scepticism to mask under-performance and you’ve just given them ammunition. My feeling here is that we have to be able to debate our basic assumptions (e.g. what a grade actually means) without worrying that people might misunderstand. If data-devotees can’t withstand a debate about what 6c means before they do the ‘levels of progress’ calculation, they’re not likely to be doing the students any favours.

3) Thankfully, plenty of people do get what I’m saying: assessment is complex; we lose the detail through over-simplification and averaging, and mass-data systems run the risk of becoming so far removed from the source (the actual learning) that they lose meaning. So we need to try harder to be more sophisticated with data – more nuanced, more tolerant of uncertainties – rather than merely taking crude data at face value. We also need to face the reality that a lot of educational measurement is based on norm-referencing and that most school outputs are a ranking measure of one form or another. Value Added at KS4, for example, is really a measure of how much the students’ bell-curve position has changed over time on average – it is a contest.

Some people have asked me what I suggest as alternatives. Here are some ideas. I’m not saying that these things solve the puzzle… but, as I am fond of saying, if you don’t like an idea, you need a better one.

ABRSM Assessment: Grade, Score, Components, Feedback

ABRSM Exams

Look at the illustration: a recent examiner’s report from a Grade 3 Piano exam. What I like about this is that you can drill down to various levels:

  • John has achieved Grade 3 in Piano.
  • John has a Pass at Grade 3.
  • John has 113 points, which is a Pass, just over halfway to a Merit, at Grade 3.
  • The 113 points arise from six components, as shown: each piece, scales, aural tests, sight-reading.
  • Each component was scored against a published set of criteria, with a written justification provided (see the ABRSM website).

The student, the teacher and the parents can use this information to take stock of the performance at this level and use it developmentally to prepare for the next level. Isn’t that fantastic? The consistency in this process is high; these exams have credibility, and the examiners have to undergo very rigorous training and moderation.
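The drill-down logic is simple arithmetic. Here is a minimal sketch of the points-to-grade step, assuming the standard ABRSM thresholds (150 maximum; Pass at 100, Merit at 120, Distinction at 130 – check the ABRSM website for the current marking scheme):

```python
# Sketch of the ABRSM points-to-result mapping.
# Thresholds assume the published scheme: 150 max, Pass >= 100,
# Merit >= 120, Distinction >= 130.
def abrsm_result(points: int) -> str:
    if points >= 130:
        return "Distinction"
    if points >= 120:
        return "Merit"
    if points >= 100:
        return "Pass"
    return "Below Pass"

print(abrsm_result(113))  # John's 113 points -> "Pass", part-way to a Merit
```

On these thresholds, 113 sits 13 points into the 20-point band between Pass and Merit – hence “just over halfway to a Merit”.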

Imagine if our assessments or GCSEs were like that: an overall summary grade, but linked to criteria, detailed scores, and a justification to provide feedback. What we get now is highly secretive: it costs money to get your exam paper back; there is no feedback or justification; there is a grade and, in some cases, component scores. Imagine if John just received ‘Piano Grade 3 Pass’ (or Merit)… and then an official body averaged up all the students with Passes, introduced another arbitrary points score, informed the teacher of the average score per student, and so on. So I am saying this: let’s have higher expectations of our public exam system and try to get closer to the ABRSM model.

Locally Defined Grades: KS3

At KEGS, which is a selective school, National Curriculum levels have never helped to raise attainment. We used to use them, but when I arrived I found quite a lot of inconsistency: no-one was quite sure what constituted Exceptional Performance or Level 8 or Level 7; some departments used A–E grading; some used percentages for tests; and only a few areas actually used NC levels formatively. It was impossible for us to use levels to measure progress. So, to create a coherent umbrella framework for our context, we devised our own system. The details are published on our Moodle VLE, KEGSnet: http://kegsnet.org.uk/course/view.php?id=264

Essentially, we use a simple *, 1, 2, 3 system to describe the learning expected in each year in each subject, in a way that makes sense for the teachers in that area. It only works because the boys are broadly aiming for the same objectives – but the idea that attainment measures are defined in context is transferable.

Attainment in Year 8 History

Here is one for a specific Geography assignment:

Geography Attainment is specific to each assignment.

In Maths, the full scheme of work is published to parents and students, and attainment is measured in terms of achieving certain percentages in the tests consistently across the year. In practice this is what teachers do to create NC levels; here we just cut out that superfluous step and reference attainment directly to the specific content, e.g. a * requires 90% whereas a 1 requires 65%.
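The Maths mapping is just a set of cut-offs applied to a year’s test average. A minimal sketch, assuming hypothetical thresholds for the lower grades – only the * (90%) and 1 (65%) cut-offs are stated above; the 2 and 3 boundaries here are invented for illustration:

```python
# Hypothetical sketch of the KEGS Maths attainment mapping.
# Only the 90% (*) and 65% (1) cut-offs come from the post;
# the 50% (2) boundary is an illustrative assumption.
THRESHOLDS = [("*", 90), ("1", 65), ("2", 50), ("3", 0)]

def attainment_grade(average_percent: float) -> str:
    """Return the *,1,2,3 grade for a year-long test average."""
    for grade, cutoff in THRESHOLDS:
        if average_percent >= cutoff:
            return grade
    return "3"

print(attainment_grade(92))  # -> "*"
print(attainment_grade(70))  # -> "1"
```

The point of the design is that each grade references the specific content tested, not an abstract national level.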

We are into our third year with the system and it is working well. However, we are clear that it is just a general guide; there are no absolutes. We need to review and develop the system continually, but it makes a lot more sense than what we had before. Each data point has specific meaning, and that is important. However, we don’t overplay it; to secure improvement we focus more on attitude to learning and the micro-detail of short-term progress across lessons.

Analysis of Examination Outcomes

We spend a lot of time at KEGS exploring our examination outcomes. Each department produces a comprehensive post-mortem, in a style that suits them, looking at exam components and break-downs by teaching group, gender and various other factors. We look in detail at RAISEOnline; we use the ALPS system for A level. We have graphs plotting outcomes over time – by A/A*%, A*–C%, total points or average points per entry; we slice and dice every which way. There are hundreds of data measures to look at and we do all of that with rigour.

However, from all of that we are looking for two things:

1) Data that points to specific issues that we can address through specific actions. This has recently led to tightening up some essay-structure issues in one subject; spending CPD time looking at a Further Maths module where the content is quite challenging; re-jigging the timeline for delivering a science course; and putting processes in place to support completion to deadlines in a practical subject with 60% coursework. This use of data is essential: nitty-gritty analysis where the actions improve real learning. Importantly, we often feel that, beyond a certain point, the data can’t tell you more. We ask: will this extra analysis lead to any actions beyond those we are already taking? If the answer is no, we leave it. Very often this applies to RAISEOnline and ALPS – they do not indicate any actions that we are not already taking. The best information comes from exam board component scores, where trends and patterns are most closely linked to the learning.

2) Data that can tell the story of achievement to parents and the community. We recognise our obligation to report outcomes to the outside world – but here the balance is between too much (making it hard to access) and too little (making it less meaningful). Even at school level, where you have the chance to write to parents with a full account – as I did this year – it is a difficult balance to strike. When the data is presented at face value it is wide open to false comparisons and misinterpretation, so I am inclined to give the detail. I wrote this post last year after the A level results: http://headguruteacher.com/2012/08/16/the-folly-of-narrow-newspaper-league-tables/  We’re in a comfortable position, I know – but the general point is the same: this information gets garbled all too easily.

My conclusion from this is that public data should not be presented raw; it is almost always misleading in some way and gives rise to a limited view of a school. For example, in my area of London two nearby schools have data profiles a bit like this:

  • School A: 77% A*–C with EM; 15% A/A*; VA = 1020
  • School B: 73% A*–C with EM; 44% A/A*; VA = 995

Which school do you want your kids to go to? Which is the ‘better’ school? I might have my bias, but obviously we need more information.

In the same way that an OfSTED report has a commentary alongside the grades, and we don’t get league tables of grades published by the DfE, I think we should have a School Data Report – not pages of raw data. Each school has a context; the data can tell a story – if you know it. Parents should see a wide range of data points alongside an account of how these measures inter-relate and what they say about attainment outcomes, added value and improvement over time. The data on its own does not tell the story properly and we need to recognise that.

Headteachers’ Roundtable English Baccalaureate Framework

I’ve been working with the HTRT on this model since January and I believe we’re on to something.  I won’t repeat it all here – but please do take time to read the proposals.  The transcript idea is a key feature as well as moving to points instead of grades, removing cliff-edge effects.

http://headteachersroundtable.wordpress.com/the-htrt-qualifications-framework-proposal/

Comments
  • David Bishop
    Posted at 14:33h, 22 March

    On a very quick glance at the proposed qualifications framework it seems to mirror the IB Middle Years programme and the full IB.

    • headguruteacher
      Posted at 14:36h, 22 March

      I would hope that it would. One goal for me would be that IB schools that also offer A levels could switch to this model so everyone could do the same thing. We might need a TOK element.

      • David Bishop
        Posted at 14:42h, 22 March

        I am an advocate of the IB and MYP, PYP. I am really pleased that this route has been taken. I will be interested to look at it in further detail over the weekend, I agree a TOK element would make it a fuller more substantial qualification. With best wishes David

  • behrfacts
    Posted at 15:08h, 22 March

    Some more good stuff Tom and I agree that just allowing (prospective) parents access to lots of raw data, which is the direction the Govt is heading in, is not helpful for the majority of them. On the other hand it does put pressure on schools/awarding organisations to ensure that there is adequate context for it and perhaps even some good graphics. I like your 4 levels of grading at KS3 and the fact that different subjects interpret this differently and that ‘effort’ is an even simpler grading. I assume that the HTRT model is a development from this for KS4 and 5 but which aligns with the current number of GCSE and A-level grades. Should KS1 and 2 be even simpler and only have 2-3 grades? Just to note that even Ofqual is trying to explain in simpler terms the exams grading system at: http://www.ofqual.gov.uk/help-and-advice/about-marking-and-grading/ . If you read it you realise just how value-based the system is…

  • Messy education (Part 2) - The Sunday Times Festival of Education
    Posted at 09:17h, 26 March

    […] Learning is messy.  It is also hard.  As Malcolm Gladwell has written in Outliers, to become really good at something you need to spend at least one thousand hours doing it.  This resonates with many teachers who have, in exasperation, told their students that they need to spend more time…well, just working harder and for longer. There often really is no better way for improving in, say, English, than reading and writing more. But certain factors work against this: not enough curriculum time, not enough one-to-one time between teacher and student, and, linked to this, not enough quick and meaningful feedback from teachers to their students. And so perhaps in response to this we try to measure achievement in ways that do everything except what their principal function ought to be: namely, give an objective evaluation of progress and attainment.  The data schools and exam boards collect is often used in a rather reductive and blunt way that does not promote higher achievement (read Tom Sherrington’s excellent blog on this here). […]

  • My Blog Manifesto | headguruteacher
    Posted at 23:13h, 09 July

    […] See Data Delusion, Data Delusion Solutions […]

  • My blog manifesto
    Posted at 08:27h, 11 July

    […] See Data Delusion, Data Delusion Solutions […]

  • Assessment, Standards and the Bell Curve | headguruteacher
    Posted at 23:08h, 17 July

    […] standards, but every aspect has an origin in relative standards.  As I’ve described in Data Delusion Solutions, piano exams are excellent.  In passing Grade 1  – and since 2 and 3 –  my son did not feel […]

  • Assessment, standards and the bell curve
    Posted at 16:59h, 19 July

    […] absolute standards, but every aspect has an origin in relative standards. As I’ve described in Data Delusion Solutions, piano exams are excellent. In passing Grade 1  – and since 2 and 3 –  my son did not feel that […]

  • Exam Reform. Another blog manifesto. | headguruteacher
    Posted at 21:32h, 24 July

    […] Data Delusion Solutions […]

  • Exam Reform. Another blog manifesto.
    Posted at 06:43h, 26 July

    […] Data Delusion Solutions […]