04 Dec How do I know how good my teachers are?
At the heart of the discourse about effective schooling is the well-evidenced view that teacher quality plays a massive role in determining student outcomes. John Hattie, Dylan Wiliam, Michael Wilshaw, Michael Gove… they'd all agree on this. We'd all agree on it. As a Headteacher, it is one of my core responsibilities – no, it is THE key responsibility – to ensure that teacher quality and the quality of teaching are as good as they can possibly be. I try hard to create the conditions for great teachers to grow and to thrive… but how do I know how good they are and what impact they are having?
There are broadly three inter-related areas that combine to develop a rounded picture of a teacher’s effectiveness:
Data
Most obviously, this is about examination results and internal assessment data. If a teacher can secure good assessment outcomes, you're inclined to be less concerned about how they achieve that. There are degrees of success too: sometimes results are good but not excellent; sometimes the rate of improvement is slower than it could be. It is a subtle business and you need to know about the ability profile of each class and other factors. Beyond the numbers, of course, there is much, much more to learning than can be measured. It is possible to grind out results from uninspiring teaching (I've done it myself). Conversely, teachers who shine in observations might not be quite nailing the exam preparation, and results might be disappointing. So – data is only one factor and it cuts both ways. There are other metrics – such as information on behaviour incidents and referrals – that might tell you a teacher rarely uses, or is over-reliant on, support systems. Again, context is key – but it is all part of the picture.
Observations
This is the headline grabber; the big focus during OfSTED inspections and a bone of contention with some unions. ('Surveillance'? Get over yourselves…) Seeing a teacher in action first hand is a rich source of information, but we need to be cautious. Whether it is a drop-in or a full-blown formal observation, it doesn't always follow that what you see is typical… things might not be working well, or you might be seeing a one-off performance. Observations are always slightly artificial because of the observer effect; they are limited to being snapshots in a continuum of lessons – so you never see a full learning episode – and, ultimately, what you really care about are the 99% of lessons that you don't see. Over time, you accumulate information about a teacher over multiple observations of all kinds… but you need to be careful not to fix your view of someone based on the past. People change – they might improve or they might drift. The more current your observation data-set is, the better – and, of course, observations can be done by lots of different people.
Knowledge
This is the cumulative store of micro-feedback that accrues over time around every teacher in a school. Teachers generate feedback continually – from students, via parents, via colleagues, from line managers, through conversations, snatched glimpses of lessons, comments in staff meetings, parents' evenings, CPD events, email exchanges… drip, drip, drip. Teachers have reputations – it is unavoidable. This could be because they are inspiring, strict, funny, eccentric, know their subject, are soft, talk too much, make lessons exciting… In my experience, this knowledge store is under-estimated in the formal accountability processes. If I'm asked how I know the strengths of my teachers, there is truth in saying 'I just do'. Students and parents will rave about some teachers and not about others – that tells you a lot. I reckon my daughter's evaluation of her teachers would be a fair indication of what I'd see in her lessons; I know them in ways that I bet their Headteacher doesn't. This information seeps out and around us… and it gets back to me as Head one way or another. Again, there is context. RateMyTeacher, for example, is a disgusting disgrace – I wish we could shut it down. You obviously need to apply a filter to this noise of feedback… but it is real enough; it matters; it counts – and in many cases, it is more accurate than the one-off observations, most often in a teacher's favour.
The important point is that all three forms of data inter-relate in a complex, non-linear fashion. Ideally, a teacher will rate highly in all three areas. That is the sign of a really great teacher – when they create a virtuous circle. Their lessons are great, evidenced by any number of observations; their teaching generates excellent outcomes; and both of these things create strongly positive reputational feedback – the knowledge data. But it is quite common that only two would apply. I've known every scenario:
- A teacher who has a reputation as a fabulous teacher, who produces superb lessons during formal observations… but where, frustratingly, the results aren't what we'd expect. Often this is due to some technical issue with matching the curriculum to the assessment, or with preparation for formal exams. But there is hope. Usually these issues can be resolved with support.
- A teacher who gets great results and who scores highly on the reputational scale, but underperforms during formal observations. Here, you need to have confidence in the two positive data-sets and question whether the observation process has given you good information. Is it fair to over-ride the other data-points in your knowledge bank, based on a couple of lessons that didn’t impress? You need to work with the teacher but take care not to over-state the hoop-jumping aspect of formal observation.
- Finally, a teacher who seems to get great results and can nail an Outstanding formal observation but, for one reason or another, generates negative reputational feedback: parental or student complaints, concerns from colleagues or line managers, and so on. Here, you need to be super cautious, but it can indicate that day-to-day lessons may not be providing the rich learning experience that they could be. (For example, I can think of a teacher I've known who made students copy extensive notes off the board literally every single lesson – oh, except during the OfSTED observation. Seriously!) Of the three, this is the greatest problem. It is hardest to tackle and often suggests some attitudinal issues that are tricky to resolve.
Obviously, falling down in more than one area is where more serious support and intervention are required, and 'capability' normally only kicks in if you're worried about all three. You may notice that there are some omissions. Teachers need to do a lot more than meet basic professional standards and follow school policies; it doesn't matter if they are a 'great person' or give a lot of time to extra-curricular activities when you are evaluating their work as a teacher. Some people work incredibly hard and give their all for the students – but that isn't enough to make them effective. We need to be honest about that. Weak teachers are not bad people and often play an important role in the community… and the converse is also true!
What does this tell us?
Firstly, it suggests that external accountability processes are flawed in a fundamental way. There is a place for external inspection of lesson quality, but the whole process needs to be more sophisticated, taking much more account of the school's view of its teachers. Is it possible or meaningful to assess the quality of teaching in a school by seeing 30, 40 or 50 half-lesson snapshots? It would certainly tell you a lot about the school, but it won't be the full story. I'm confident that I know how good the teaching and teachers are in my school – and I'm not sure that inspection processes allow me to get that across.
Secondly – and here is the main point – it tells us that the 99% of non-observed lessons are the ones we should be more bothered about. So much energy is wasted on hoop-jumping for inspection – but it is all the other lessons that drive excellent assessment outcomes and generate positive feedback from students, parents and everyone else. What we should be doing is worrying less about the snapshots, the one-off showcase circuses, and worrying more about ensuring our routine practice is as strong as it can be. Securing strong assessment outcomes and having lessons that are engaging and inspiring are not mutually exclusive! We can aim at both. To stop the hoop-jumping, we should apply a more critical eye to our own practice, ensuring assessment evidence feeds back into the learning in our lessons. Doing this individually and in our teams allows us to move forward without feeling we are wasting energy on artificial external accountability.
Finally, the message for leaders at any level is that we need to generate a rounded view and be cautious in making partial judgements… How well do you know your staff? How do you know what you know? Which bits of information do you value over others? Let's make sure we are acting intelligently, using the most sophisticated tools we have to see things in the round, so that we can create the trust culture we need for growing outstanding teachers across our schools.
Soon after writing this, the Government made an announcement about pay scales being linked to performance. I wrote this article for Labour Teachers. http://www.labourteachers.org.uk/blog/2012/12/18/performance-related-pay-wrong-diagnosis-wrong-solution/