There is much consternation in the land this summer as secondary-school teachers and students discover that their GCSE exam results are lower than predicted. In English, some ten thousand students have a D grade instead of the C they were expecting, which could deny them sixth-form study. The government is committed to “driving up standards,” and the bar has been raised. Students are disappointed, and under the OFSTED rules, their schools and teachers have suddenly become less effective than the day before – which has serious implications.
The only good thing about this arbitrary nonsense is that it reveals the deception that lies at the heart of the exam machine. The problem begins as soon as the question paper is set. It is impossible to decide in advance the precise scores that define the grades. I recall that one year – when I taught A-level maths – it emerged that A-grades were awarded on an unusually low score, and my students were baffled. It looked as if their efforts had gone unrewarded. But that year’s exam questions were much tougher than usual, so all the scores were lower. A certain proportion of students will be awarded A grades every year, depending on prevailing factors such as previous practice and the availability of university places. The same goes for GCSE and other grades: the results are a function of the system, not of absolute merit.
Teachers and schools get to know what the exam boards want and play the system accordingly. Under New Labour, the government was keen to show that investment in schools paid off in better exam results and more students entering an expanded university system. So the exam boards were encouraged to bump up the pass rate. The questions were made easier by multiple-choice formats and by “spoon-feeding” right answers. Teachers even discovered in advance which topics would come up in the papers. The result was the “mickey-mouse” grades once hailed as a vast step forward, and now so vigorously despised – since we have a government anxious to prove that it is raising standards. So the process is being reversed. For purely political reasons, the system is manipulated and students and schools pay the price.
These exams, therefore, are unreliable guides to individual pupil capability: what they actually reflect is the way the school functions as an exam-directed business. If a university really wants to assess the quality of a student’s thinking, it will conduct its own assessment, or interview the candidate. So the question arises: are exams like GCSE, or indeed A-level, worth all the hassle and expense? Certainly not, in the view of Dr W. Edwards Deming, the American management theorist who pointed out in the 1980s that in exercises of this kind, what is being assessed is not the students, but the system of which they are part. The schools work out what the exams want, and have to deliver it. With monsters like OFSTED on the prowl, they have no alternative.
It doesn’t have to be this way: Finland has dispensed with all public exams at 16-plus level and its students do very well. In France, students take a single exam – the Brevet – which they are all expected to pass. Its purpose is not to differentiate – simply to confirm that the school has done its job. Most students then go on to choose a baccalauréat course from a wide range of options, all of which retain a broad view of education around a specialist core.
What has all this got to do with slow education? Simply this: that it’s much more important for schools to help students achieve real understanding than to have them memorise procedures and facts for exams that are political rather than educational. This huge assessment burden constrains teachers and students, distorts the curriculum, and discourages initiatives like slow learning. Nobody wins.