So, these kinds of tests must have the 'right' proportion of failures. That proportion has to be built into the test, and into the lead-up to the tests - in other words, into what we call 'education' (!).
To a certain extent, it really doesn't matter exactly what the knowledge is in these questions, so long as it can produce these failures. Now, of course, there's a whole lot of bluster and posturing that goes on about 'core knowledge', the 'knowledge-based curriculum', 'tradition' and the 'basics' to buttress the idea that this core knowledge is essential. In fact, the more people in power can get the press to concentrate on this aspect of the matter, the less likely they are to notice or care about the failure-machine embedded in such testing. So, all week we've heard Nick Gibb going on about 'standards' and the 'basics' as if knowing what a 'fronted adverbial' is counts as 'basic'. Whatever it is, it sure ain't 'basic'! But it works: the press buy the idea, and imagine that somehow such tests are 'finally' getting to grips with the 'huge problem' of England's 'low achievement'. This neatly overlooks the fact that any 'low achievement' is in fact constructed or produced by the tests themselves! To repeat: failures are an essential part of the tests' design.
The instrument by which the failure is brought about is the chopping-up of the knowledge (e.g. 'grammar' or 'maths') in such a way that only questions with right or wrong answers are asked, or can be asked. This is a clear example of how the way we chop up knowledge becomes the knowledge itself. So knowledge isn't simply a 'what' or a 'body of facts' but a 'how', in that we are trained to think of knowledge as a series of right/wrong features.
As I've said before, this was made explicit in the reasoning behind the introduction of the SPaG/GPS test. Lord Bew in the Bew Report (2011) made clear that this test was suitable and appropriate for making schools 'accountable' precisely because Spelling, Punctuation and Grammar were subjects for which there were 'right and wrong answers'. It cannot be repeated often enough: this is not true. The only way it can be made to be true is by squeezing and distorting the 'knowledge-base' (the facts) about grammar, spelling and punctuation in such a way as to make it possible to ask questions which produce right and wrong answers, according to the marking scheme.
When Nick Gibb fluffed his answer to Martha Kearney on the radio about identifying a word as a 'subordinate conjunction', he wasn't actually wrong in the broadest sense. He found himself in a field of indeterminacy and debate: linguists argue and dispute about how to name bits of grammar. So, it wasn't actually Gibb who was 'wrong', it was the test! Gibb exemplified exactly why such tests are wrong, i.e. because there isn't a right/wrong answer to that question. Such a question - the one he was asked - is designed precisely to produce sufficient numbers of children who will get it 'wrong', even though there is no right or wrong answer. In other words, the knowledge base is distorted by the testing system in order to produce failure.
So when I've said in the past that these tests are 'about right/wrong', in a way I've over-complicated it. The core purpose is that they're about finding enough children to be 'wrong', to 'fail'. Any teacher looking at one of these tests knows which questions on it will produce the failures. It happened in yesterday's KS2 Maths paper 2. Teachers know that though many of the children will have understood the concepts behind the questions, the wording of those questions will have guaranteed failure for many children.
Now, when you lock this guaranteed-failure system into a restructuring of education, then you have simply hijacked 'assessment' to do the job of changing schools from being local authority schools to academies. Children's guaranteed failure is the instrument through which academisation is done.
So, if you are a teacher or parent, and you see that look on a child's face, when they look sad, or disappointed or have a sense of 'un-worth' or when they are stressed and upset, these are all 'necessary' parts of a political project. The stress is political.
It's not us who have 'politicised' this stress. It's a necessary and essential part of a political programme to take schools out of our hands and give them (on 125 year leases) to sponsors - whoever they might be.
*high stakes - it's useful to use this expression to describe these centrally run, centrally directed mass tests. The question here is whether we think governments have the right to do this in a child's school career once (e.g. at 18), twice, or, as some favour, many times. There is an argument for saying that once you've introduced high stakes testing, then it is inevitable that these will be summative (see below) and norm-referenced (see below). Once that apparatus is in place, and the high stakes tests are frequent, you have in fact determined the curriculum, the nature of teaching, the child's experience of education, teacher-training, parent-expectation. This makes the matter of how often there are high stakes tests absolutely critical. Put another way, how much teacher-pupil contact time (lessons) can be devoted to education that is not immediately or directly in the grip of high stakes testing?
**summative - summative tests, as the name implies, supposedly sum up and test a given chunk of knowledge in a do-or-die sort of way, i.e. with no correction, debate or dialogue. More often than not, they are not 'diagnostic'. That's to say, they are not designed to help the learner discover how to do better or what's gone wrong, because the test comes at the end of a bit of learning; it's too late to help the learner. One key alternative to summative testing is 'formative' testing, which in its various types could or should involve learners and teachers making assessments of how both could improve what they're doing.
***norm-referenced - this means the kind of test where the results are 'plotted' against a 'norm'. Either before the children take the test or after, a line is drawn across the distribution of marks which says: this is the 'norm' - a kind of 'pass-mark', if you like. All marks are then set against this. In other words, your final mark is not about whether you have 'attained' or 'learned a given amount of knowledge' but about whether you hit the 'norm', or are above or below it. A key attainment test we all know about is the driving test. Now imagine passing the driving test, only for the examiner to take your results back to central office, where it's decided that 'too many' people have passed it this month, so you haven't passed after all! That's norm-referencing. The alternative to this is 'criterion referencing', which does indeed rest on attainment: has the candidate learned/done what was required of him/her?
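To make the distinction concrete, here's a minimal sketch in Python. The marks and the 50% 'fail fraction' are invented for illustration, and `norm_cutoff` is a hypothetical helper, not any real exam board's method:

```python
# Criterion referencing vs norm referencing, in miniature.
# All numbers here are invented for illustration.

marks = [12, 25, 31, 40, 44, 47, 52, 58, 63, 71]  # raw marks out of 80

# Criterion referencing: the pass mark is fixed in advance,
# so everyone who attains it passes.
PASS_MARK = 40
criterion_passes = [m for m in marks if m >= PASS_MARK]

# Norm referencing: the cut-off is set from the distribution itself,
# so a fixed share of candidates must fall below it, no matter
# how much anyone has actually learned.
def norm_cutoff(results, fail_fraction):
    """Return the mark below which the bottom `fail_fraction` of candidates fall."""
    ordered = sorted(results)
    return ordered[int(len(ordered) * fail_fraction)]

cutoff = norm_cutoff(marks, 0.5)                # here: 47
norm_passes = [m for m in marks if m >= cutoff]

print(len(criterion_passes))  # 7 candidates attain the fixed pass mark
print(len(norm_passes))       # only 5 are allowed to 'pass' under the norm
```

Seven candidates clear the attainment threshold, but the norm-referenced cut-off lets only five of them 'pass': the other two fail not because of what they know, but because of where they sit in the distribution.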
Note: governments all over the world very rarely openly admit that their high-stakes testing is fully and wholeheartedly norm-referenced. For them to do so would educate the press and everyone else to the fact that the world's exam systems are in fact designed to produce a percentage of failures, who will then not be entitled to more education. What's more, admitting that this is pre-judged before candidates go into the exam halls would shatter the whole illusion that it's possible for 'everyone' to succeed. So 'anyone might succeed' is inflated into 'everyone can succeed', when quite clearly the system is rigged and sustained on the basis that everyone cannot and must not succeed.