03 Jun Evaluating Teaching and Learning: The Departmental Review
One of the key issues for us, as it is in any school, is to ensure that the quality of teaching and learning is as good as it can be. This requires us to engage every teacher and every department in a continuing cycle of evaluation, feedback and planned improvement. Over the last two years the main vehicle for this process has been our Departmental Review, as I described in this post. The key aspect of this process is that individual observations and scrutiny processes are conducted under the umbrella of a whole departmental review, so that collective learning is undertaken in parallel with the development of individuals.
Originally, our main aim was to ensure that we captured as much developmental value as we could from the formal observation process. We even ditched giving OfSTED grades for a year to reclaim the core purpose of observation as a feature of CPD rather than primarily an accountability mechanism. However, with the framework changing so many times since our last full inspection in 2006 (yes… that’s right!) we invited a team of external observers last summer to help keep us up-to-date and we’ve returned to the grading this year as part of the second cycle of the process. We’ve learned a great deal about the value and impact of external visitors making snap-shot judgements and about our self-belief in terms of the quality of what we’re doing.
In thinking about the next cycle, next year, two key observations have been important to me:
- Alongside another observer, I watched a lesson taught by someone who I believe is a cast-iron teaching expert: someone who year on year secures extraordinary outcomes and who knows their subject so well that, if they think teaching a certain way is appropriate, no-one (and certainly no inspector) could really argue. So how on Earth did we end up accepting that this lesson segment was judged ‘Good’ without running the observer out of town? I’m ashamed of myself for allowing that to happen. Not enough differentiation? Get away….. Nothing about the overall, long-term experience of learning in this teacher’s lessons is less than outstanding; it was the snap-shot observation process that was flawed.
- In the second year of our Departmental Review cycle we have kept the Line Managers the same as in the first. I was involved with Art (as featured here), History, Maths and DT – a good cross-section. After two years I now feel I know the teachers in these departments in a well-rounded sense; some I know so well that a one-off lesson observation couldn’t really change my view of their overall impact on learning outcomes. It could be dazzling or it could have some weaknesses, but I know enough context to put that in the right perspective. I also now understand in some detail how progress and feedback are monitored over time, what excellent work looks like by the end of the course in each subject across the ability and age range, and what the key areas of concern are around matching pedagogical developments to measurable outcomes. The point is that it has taken me two years to develop this knowledge… and now the observations just slot into a big picture without being overly important in themselves.
I have also been keen to embed the thinking that underpinned my post ‘How do I know how good my teachers are?’ into our formal processes more explicitly.
With these ideas in mind, we’ve been looking to develop what I call a ‘longitudinal’ process, one that moves us far, far away from the limits of snap-shot observations. Our most recent Middle Leaders meeting explored this issue and there seemed to be a few key tensions to absorb:
1. We want a highly developmental process where lesson feedback helps us to improve… but we can’t meaningfully separate that entirely from accountability responsibilities.
2. Snap-shot processes are inherently limited, but adding other elements and taking a more longitudinal view also adds to the level of scrutiny (albeit scrutiny that already exists). It’s unavoidably double-edged: a chance to demonstrate the good work you are doing is also another moment of scrutiny.
3. Middle leaders are primarily interested and skilled in supporting in a collegial style but are also responsible for maintaining standards in their area – which must include securing improvement and tackling underperformance whenever issues are identified.
4. Student work, with the organic record of feedback, represents the best evidence of the routine practices of individual teachers and of the department in securing progress over time but work sampling can be cumbersome and is, de facto, a source of scrutiny pressure.
5. Subject specific processes allow for more subtle fine-tuning but there is a need for a fair and transparent process that allows standards to be consistent.
Thinking about all of this, we’re suggesting that the best way forward is to develop the Departmental Review process to include more formalised elements, and here is the current proposal:
Most of these things are happening already; we just need to make sure that they happen consistently. The exact timing and sequence of these elements is flexible so departments can absorb them in a way that works for them.
There are two elements that need to be more explicit:
The Assessment, Feedback and Progress Review: Making sure the Line Manager and HoD engage in dialogue across the year about the nature of marking and feedback and how this leads to progress as shown in books, tests, pieces of work, folders – or wherever. Regular engagement with this would avoid a cumbersome one-off collection process; however it is done, line managers and HoDs need a good overview of the nature of students’ work in each class and how this is informed by the feedback dialogue. Line managers are obliged to educate themselves about the long-term outcomes in their subject areas… so that any observation has a broader context. It isn’t good enough to extrapolate from what you see in just one lesson…
Student Focus Group: This is for the HoD to organise or delegate, with a simple report-back to the department and line-manager on the key strengths identified and suggestions that students make. It’s a powerful source of constructive information that HoDs can manage with appropriate ground rules and so on. I’ve suggested this is biennial because that keeps it in proportion in terms of effort and information value.
There is also the opportunity for departments to re-draft the school’s OfSTED-referenced lesson observation template into a bespoke departmental version that is more relevant for their purposes. In each of my line-management areas, I feel we’d get better value from a subject-specific set of observation criteria, and that will be an area for us to develop in the coming year.
So, that’s where we are now. No pain no gain, as they say…. but I’m optimistic that this approach, enabling the scrutineers to get ever closer to the true picture of learning and achievement, will be successful. And if OfSTED ever do come… I will say, ‘Sorry, we don’t do that false snap-shot thing; it’s not good enough for us… we have a better way.’
Update January 2014: Today we decided to ditch lesson observation grades. With growing disquiet around grading, and evidence from Professor Rob Coe, amongst others, highlighting the flaws in the grading process, we’ve decided not to give them. Instead we will develop our use of feedback following lesson observations so that perceived strengths and areas for development are expressed clearly. For example, if a lesson is slightly less than ideal, the scale of improvement needed should be conveyed differently from when a lesson is actually quite poor. With Lesson Study finding favour with more teachers, the snap-shot drop-in approach seems less and less satisfactory.