Our new marking policy makes me feel stupid

When OFSTED came it was a bruising and unpleasant experience, particularly for the Maths department. We were singled out for criticism because the Maths results were not as good as the English results. Enough about that, because it’s a subject for another post/rant. What’s relevant to this post is that they said the quality of marking/record keeping and the systems for demonstrating progress were inconsistent across the school.

This resulted in our new “one size fits all” marking policy.

Before I continue I should say that this is not intended to be a rant about SLT. I like my current leadership team. They have been very nice to me. They run the school pretty well and are generally competent and well intentioned. Whenever I have had personal problems I have got a very human and sympathetic response. I would say that there are three members of SLT who are excellent. The rest are what I would call “a safe pair of hands” in most regards (damning with faint praise perhaps, but then I would describe myself similarly).

I also do not completely hate the marking policy. It suits some departments very well. It is a genuine attempt to reduce workload, and in some regards it has done so for us in Maths (KS4, for example). Some departments say it has made an enormous difference to their workload.

I’m only going to write about the problematic bits of the policy as it applies to Maths.

There are several problematic areas for me. The first is the sheet on which pupils are supposed to record the evidence of their progress. Imagine an APP grid on which pupils record what they can and can’t do and note where the evidence for each topic can be found. On these grids they are supposed to record their test results, transfer or summarise my written diagnostic feedback from their books onto the record sheet, and RAG each topic.

My first issue with this is the time-consuming pointlessness of most diagnostic marking in Maths (blogged about here: https://mylifeasacynicalteacher.wordpress.com/2014/01/31/book-scrutiny-once-again-shows-that-i-dont-do-much-of-things-i-think-are-pointless/)

This is compounded by expecting pupils to copy this feedback onto a separate sheet (although it does, I suppose, ensure that they have looked at it).

The second issue is pupils using RAG to record what they can and can’t do. The problem is that this is not set in stone; it is fluid over time. Let’s say for the sake of argument that all pupils do their best to fill this in properly. If they record something in green at the time of learning it, there is no reason to assume they can still do it when they come to revise for a test. There is no reason to assume they will still be able to do it when the inevitable learning walk or book scrutiny comes round. In my experience, with a lot of pupils there is no compelling reason to assume they will still be able to do it next lesson.

The Head of Maths, to his credit, has decided that the best way to address this is to test pupils on a topic a few weeks after it has been taught and fill the grids in on the basis of those tests. This does rather go against the intended workload reduction element of the policy and does not address all the problems, but it’s better than nothing. SLT said that one question can constitute an assessment. This may well be true in some subjects, but personally I would find giving a meaningful grade or level on the basis of one maths question problematic.

Different people in the department are filling in these grids in slightly different ways so we have had several meetings to try to get some sort of consistency across the department.

My classes end up with a grid containing:

  1. A list of the topics they have studied this year, broken down by term - useful
  2. Where a revision resource for that topic can be found - useful
  3. Where the evidence that the pupil can do this topic can be found, along with my feedback - not sure what the point of this is
  4. RAG for each topic - merits of this are dubious in my opinion.

In the meeting I asked who these grids were supposed to be for. Initially I was told that they were to benefit us as teachers. When I queried this I was told that the grids would make it easier for teachers to show all the good practice they are doing when SLT or inspectors observe them. I pointed out that if the purpose of this is to benefit me then I ought to be able to opt out if I can’t see or don’t understand these benefits. Apparently not. I asked if I could opt out if I don’t really care about showing observers my amazing practice. No. I asked if I could opt out if I felt that the benefits were significantly outweighed by the extra work. No.

The meeting then moved on to how these grids benefit the pupils. To be perfectly honest I didn’t understand this bit either. I understand why having a list of topics is beneficial. I understand why having where to find revision materials would be useful. I have been through the arguments for why it’s beneficial for pupils to fill in the rest of the grid several times now. I still don’t get it.

I asked what the minimum effort I could put into the grids without anyone hassling me would be. I was informed that if I did the minimum it would look bad when I was observed, as I would inevitably be compared to people doing their best. That is not really an answer to the question, though.

In my experience of using this, a significant minority of pupils struggle to read, understand or remember what the topics are based on the list. This means they either don’t fill it in correctly or it takes a significant amount of my time to get them to fill it in properly. The pupils could be spending this time learning something. On top of that, I have checked pupils’ understanding of previous topics during lesson starters and found that what they can and can’t do does not match what they have recorded.

Only a tiny proportion of pupils appear to be benefiting at all from this. It takes up quite a lot of lesson time for very little benefit as far as I can see. It is also deeply tedious and results in unnecessary confrontations with pupils as they have to be made to fill it in (and often made to do it again when they yet again fail to do it properly). Despite my efforts lots of pupils have not filled it in properly.

The obvious beneficiaries of this new system are observers. They can look at the front of any book and see pages of assessment results and diagnostic feedback. This seems to be used as a proxy for judging whether people are doing their jobs properly. They can also see whether teachers are doing what they’re told or not. One box given to them by OFSTED can be ticked.

Every time I get feedback from a learning walk or observation, the grid not being filled in properly is one of the main opportunities for improvement. I’m doing it wrong. I can’t do it right because I don’t understand the point of it. Having used it, I understand it even less than I did when it was hypothetical, because it quite clearly is not doing what it is supposed to.

The people pushing this policy are intelligent people. Maybe my inability to understand this marking policy means I’m not as smart as I previously thought.
