Our annual exam results uproar

Checking A Level results. Photo by gpointstudio via freepik (CC BY 2.0)

As usual, August has brought great debate about exam results in the press. With Parliament in recess and not much political news, a story which affects hundreds of thousands of families is always popular. Media outlets like photos of pretty young women. Exam results provide an ideal opportunity to show them excitedly celebrating their successes.

Every year we have the same debates. Are standards or achievement rising or falling? Should we be using more exams or more coursework? Is the system fair to boys and girls, to students from particular backgrounds, regions or types of school? Bizarrely, this year we are considering whether to replace grades A* to E with numbers 1-9.

The Covid years are special

In the last two summers, the Covid pandemic has given a special edge to these debates. Every student has had their education severely disrupted, although it has been much worse for some than for others. The gap between private schools and comprehensives has grown, and pupils eligible for free school meals have fallen further behind.

Government policies and instructions have changed, sometimes at the last minute. Schools and teachers have had to take on new roles in assessment, and assessment has been done differently in different places. There has been concern about whether the system is fair, and about ‘grade inflation’. Some people believe that too many candidates, especially in some schools, are getting high grades.

Are we measuring the right things?

One reason we always have the same debates is that the system is designed to achieve a variety of sometimes conflicting aims. But we rarely ask why we are doing this, and what these exams are really for.

Different subjects have different requirements. There are some subjects where exams test very specific skills: can a student do x to an acceptable, measurable standard? This matters especially in some vocational fields. But many are testing more generic qualities. Does the student have a broadly appropriate range of knowledge, and the ability to assemble an argument based on that knowledge?

Are exams the best way of measuring?

There are also debates about how we want to measure attainment. The ability to use a body of knowledge to write an essay under exam conditions in 3 hours is a very specific skill. It shows that the candidate has acquired (some of) that knowledge, and can organise and write about it clearly under time pressure. It is a test of memory, discipline, and writing ability. It also shows that each individual can do this without anyone’s help. But there is also an element of luck: how well the student was feeling on the day, and how far the questions on the paper happen to match what he or she knows best.

Traditional exam skills matter in the real world. But, in the real world, people also need to be able to do these things over a longer period, with access to libraries and the internet and as members of a team. Those are different, but equally important, skills. So, there is a case for assessing coursework, and for exams taken over several days. But if students can take the questions home to do the research, some will have more support, and how do we know it is that student’s own work we are assessing?

What is it all for?

This takes us to a more fundamental question. What do we want these results for? When the Schools Minister tells the BBC “Today” programme that “exams are the best way of assessing a student’s attainment”, what does he mean? 

Exam results insecurity. Photo by Hello I’m Nik at Unsplash under Creative Commons licence.


The obvious answer is, to measure what people know and can do. For teachers at the next stage, or for employers, this matters. It is frustrating for teachers and students when half of a university class knows something and the other half doesn’t. It can be dangerous if a new employee does not have the specific skills or knowledge required for the job. But it is often difficult to find good measures of what a student knows and can do. The ability to write an essay about a certain skill is not always a test of the ability to perform that skill.

Relative position

There is a second, very different, purpose: ranking. Regardless of precisely what skills and knowledge a given grade means, universities and employers have only a limited number of places open. So they need to be able to identify who is best qualified. A candidate with three As at A level is more likely than one with three Cs to be successful on a university course. So, it is arguably more important to be able to put candidates in order than to measure precisely what they know and can do. For that reason, exam grades are traditionally adjusted, or ‘norm referenced’: roughly the same proportion of students gets a particular grade each year, so the same performance may get a B in one year but a C in another.


A third purpose is to motivate students, teachers and managers. Competition is a powerful motivator, and most of us take work more seriously when we believe that someone else is going to measure it. When a student or teacher is motivated by a real passion for the subject this may not be true, but few have a passion for all the subjects they study, and for many people the temptation to relax is real. But, if the exam motivation is too strong, and we end up ‘teaching to the test’, we have missed the point.

System performance

A final purpose is to measure the system. Are some schools, teachers or groups of students performing better than others? How is performance changing over time? Managers and policymakers need to know whether particular institutions, or kinds of student, are doing less well than others. Exam results can help to identify problems and put things right.

Exams are especially important as measures of the performance of schools: parents fight to get their children into “good” schools, and exam results offer an easy way to judge this. Although there are many things that matter in the school experience, most are difficult to measure. So, because exams give us simple numbers, we often treat them as the most important.

These purposes are not all compatible

All these purposes are important, but they point us in different directions. National media debate usually fails to recognise the complexity and tensions.

In some subjects we need to test very specific skills and knowledge, but that is not true for all.

Formal exams and coursework assessment measure different things. Deciding that one is more important than another is a political choice. There is no right answer, but the decision will always favour some kinds of student, knowledge and skill.

Grade inflation matters because ranking matters. From the point of view of progression and employment, it is pointless to celebrate more people getting A* at A level. Universities and employers choose the best, using whatever measures are available to them, and exam results are often the main tool used.

Giving heavy weight to teacher assessment changes the tasks of the teacher, and risks discrimination for or against particular students. And if teacher assessment is used to measure teacher or school performance, it is hard to resist the temptation to inflate assessments. This is more likely to take the form of giving marginal students the benefit of the doubt than of deliberately gaming the system. But the results will be different, with real impact on the reputation of the school.

On all these issues there is a balance to be struck between costs and benefits. The more objective and independent of the school an assessment is, the more expensive the system is to administer. Spending more on examinations may mean spending less on teaching: where do we draw the line?

A Level results uproar. Photo by Hammersmith & Fulham Council on Flickr under CC licence

What have we learned from the Covid experiment?

So, despite the annual media frenzy, we should not treat exam results as if they were complete and objective measures of the attainment of students, teachers, schools or the system. And we should not rush to draw conclusions from changes from one year to the next.

All the students who have experienced the last two years of disrupted education deserve to be celebrated, both for their survival and for their achievement. Covid has forced us to carry out an unintended experiment in education. We should consider what we have learned about what works, and what doesn’t.

But, we should also be careful. We should take this opportunity to consider what we actually expect from our education system, and not rush to change the rules without a thorough analysis of what we really want to achieve.

We might also want to consider that some countries seem to do well with fewer exams altogether!
