Aussies love a good competition – seeing where our team ranks is one of the mainstays of our passion for many sports, indicating who is currently the best and who is at the bottom of the pile.
So why shouldn’t we rank schools to see how they are going too? Currently in NSW, school league tables are published by newspapers after the release of HSC results, ranking schools by the percentage of top-band results achieved by their students.
A school’s ‘success rate’ is calculated as the total number of top bands (the numerator) divided by the total number of examinations sat (the denominator).
These league tables and the criteria for a school’s ‘success rate’ have been created by the media using the only information available to them (that of the highest performing students), and the language surrounding them often mirrors that of sports teams, typically talking about ‘rising’ and ‘dropping’ in the rankings as well as ‘beating’ other schools.
Principals and teachers are under enormous pressure from school boards and parents to do well in these rankings as they are considered an indication to the community of how their school is performing (some principals are held to account, for instance, if their school does not appear in the Top 100); but this can promote dubious practices that do not necessarily foster sound educational values.
A school’s success rate can be manipulated by either increasing the numerator of the ‘success rate’ calculation, or by decreasing the denominator.
School rankings can, for example, be potentially improved by getting students to take lower levels of a subject (thereby increasing the number of top bands and so the numerator) when these students might well be capable of a higher level course, but less likely to get top bands.
Students who are not headed for a top band in an extension subject will improve their school’s rankings by dropping the subject altogether, reducing the ‘denominator’ – which is not necessarily in the best interests of each individual student.
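The two manipulations described above – inflating the numerator and shrinking the denominator – can be sketched with hypothetical numbers (the figures and function name below are illustrative only, not real school data):

```python
# A minimal sketch of the 'success rate' formula described above,
# showing how both manipulations raise the metric without any
# change in teaching quality.

def success_rate(top_bands: int, exams_sat: int) -> float:
    """Success rate = top bands / examinations sat."""
    return top_bands / exams_sat

# Baseline (hypothetical): 200 exams sat, 40 top bands.
baseline = success_rate(40, 200)        # 0.20

# Steering 10 capable students out of extension and into a lower
# level course where they achieve a top band inflates the numerator:
more_top_bands = success_rate(50, 200)  # 0.25 -- rate rises

# Alternatively, 10 students unlikely to reach a top band drop an
# extension subject altogether, shrinking the denominator:
fewer_exams = success_rate(40, 190)     # ~0.211 -- rate also rises
```

Either way the published figure improves, even though no student has learned anything more.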
The way the rankings are calculated means that a top band in any course has the same effect (e.g. a Band 6 in a regular course and Band E4 in an extension course), no matter whether the course is standard, advanced or extension level, or whether it counts as one or two units.
If students have the aptitude to study Extension Mathematics, they are more likely to be capable of a Band 6 in standard level mathematics than advanced level mathematics, so there is a potential incentive for those concerned only with top grades to take the lower level of course – an educational anathema.
If we want Australia to compete on the world stage in areas such as STEM in the future, we need more students to take extension courses like mathematics to prepare them properly for these mathematically challenging degrees.
Sitting a Standard Mathematics exam in Year 12 is not adequate preparation for an engineering degree.
Moreover, students who get a Band 5 in a regular course or a Band E3 in an extension course (both indications of considerable academic performance and potential) lower their school’s success rate (by increasing the denominator but not the numerator) and hence its ranking, and so are not counted as ‘successful’ despite their achievement.
Furthermore, schools that encourage their non-ATAR students to sit HSC examinations are scoring an own goal when it comes to these rankings, since all these results count towards the denominator of their success rate but few, if any, will count towards the numerator.
However, it may be beneficial for these students to sit some examinations, for their own self-esteem, future prospects and intellectual development.
Errors in the ranking calculation
Perhaps even more concerning than the flawed educational basis on which schools are ranked by the media in NSW are the mathematical errors in the way the rankings are calculated.
For example, HSC examinations that students sit externally (for instance if they study a foreign language remotely that is not offered by the school) count towards the school’s successes (numerator) if they get a top band, but are not counted as ‘examinations sat’ (denominator).
That’s just bad maths and definitely wouldn’t get you a top band.
In a hypothetical example, consider a school with 100 examinations sat, 15 of which are top bands.
If these examinations were taught and sat internally, the success rate would be 15%, equating to a ranking of 118 in 2023.
If 20 of these exams were sat externally, including 10 of the top bands, the success rate would be 15/80 = 18.75%, equating to a ranking of 82 in 2023, raising the ranking of the school by 36 places, despite the fact that the school played no part in this academic success.
In fact, the more students sitting external exams, the better a school’s success rate will be due to this mathematical error in the calculation (external courses having only a potentially positive effect on the numerator and always reducing the denominator).
In an extreme hypothetical situation, it would be possible to get an infinite success rate by having no courses taught internally at all and getting at least one top band externally.
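The anomaly can be sketched using the hypothetical figures above (the function below illustrates the calculation as described in this article, not the newspapers’ actual code):

```python
# A sketch of the external-exam anomaly: external top bands count
# towards the numerator, but externally sat exams never enter the
# denominator.

def published_rate(internal_top: int, internal_sat: int,
                   external_top: int) -> float:
    """The flawed media calculation described above."""
    return (internal_top + external_top) / internal_sat

# All 100 exams taught and sat internally, 15 top bands: 15%.
all_internal = published_rate(15, 100, 0)   # 0.15

# 20 of those exams (10 of them top bands) sat externally instead:
# numerator still 15, denominator shrinks to 80 -> 18.75%.
some_external = published_rate(5, 80, 10)   # 0.1875

# With no internal courses at all, the denominator is zero and the
# rate is undefined (division by zero) -- the 'infinite' success
# rate of the extreme case above.
```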
The case for change
Education should be about the growth and learning of each individual student, rather than being a competition for who gets the best grades.
In fact, if you make education into a competition, half of all students and schools are, by definition, in the bottom half of the rankings, no matter how well they are developing academically.
It is a zero-sum game, with some students’ and schools’ success coming at the expense of others’ lack of success.
It becomes virtually impossible to nurture all students in a climate where only the top grades are celebrated. Furthermore, it is inequitable and a ‘one-size-fits-all’ approach to measuring educational success.
Students, after all, enter a class at many different levels of aptitude.
These differences become greater as they progress through secondary school.
Students starting at a high level will generally go on to get high grades at the end of the year, and those at a low level will typically not be able to compete with them in the space of a single year no matter how hard they try.
It is no wonder then, that academically selective schools always dominate the top of the tables – their results may be more a reflection of the ability of the students that they select than superior quality of teaching and learning.
Sustaining motivation and effort in the face of poor grades is something that only the most academically buoyant students can achieve.
The majority of students who are judged to be ‘below average’ just give up in the face of competition for grades.
At the other end, high flying students who are rewarded for their academic achievement often become increasingly afraid of failure, selecting courses in which they feel high levels of competence, rather than those which may challenge them academically and in which they are not guaranteed top grades.
This may be good for their self-esteem and confidence in the short term (as well as potentially improving their school’s rankings), but educationally it is not necessarily honing their skills or pushing them to the ‘next level’ for the future.
How we can change
The success criteria used by media outlets in NSW league tables are not carefully or educationally thought out – they are largely the result of an act of Parliament prohibiting the release of any data relating to students’ results other than this information on top bands.
Parents and school boards have a right to know how their school is doing, but if schools are to be held to account against educationally sound measures of success, we need a variety of indicators that assess the learning and growth of all students, not just the academically elite.
When we assess other intangibles, such as the economy, we employ a variety of indices, not just a single one.
For instance, university rankings are based on many different measures such as academic reputation, employer reputation, research output, employment outcomes, and sustainability.
Similarly, we need to allow the release of more information about school performance, such as the numbers of Band 5s and extension Band E3s, the median ATAR (a measure of how the mid-academic-level Year 12 student in each school is doing), as well as potentially using educational growth and/or other measures such as student and teacher wellbeing, student and parental satisfaction, staff retention, exclusion rates, reputation and so on.
League tables, if they are produced at all, need to be based on criteria that have been carefully considered by teams of educational experts, rather than on a measure the media uses simply because it is the only data available to them.
It is time for us to focus on things that actually reflect sound educational values rather than measures that cause untold damage to our education system.
Robin Nagy has taught mathematics and been a senior leader in secondary schools for over 20 years in the UK, Thailand and Australia. He is a director at Academic Profiles, an education consultancy that analyses data for independent schools. Mr Nagy is a UNSW PhD candidate, investigating the optimisation of high school students’ effort.