Rankings as norm-setting for higher education

News | the editors
5 November 2007 | Last week the leading ranking makers met in Shanghai. Every dream about, and criticism of, the quality lists covering higher education and R&D was put on the table. Alex Usher, vice-president of the Educational Policy Institute, gives a first analysis. “A lot of the complaints one typically hears about university rankings are – in my view – fundamentally misplaced. Whether directly or indirectly, rankings are pretty good at capturing the norms of the academic profession. If you have a problem with rankings, you actually probably have a problem with these norms.”


I had the pleasure of spending this week in Shanghai, China at the 3rd meeting of the International Rankings Expert Group. The meeting was hosted by the remarkable Professor Nian Cai Liu, who – without exaggeration – has more or less single-handedly revolutionized the way universities see themselves on the global stage through his Shanghai Jiao Tong World-Class Universities rankings. With delegates from more than twenty countries, the assembly provided a fascinating forum for tracking the explosive growth of rankings across the globe and the ways in which universities are striving for global excellence.

Jamil Salmi of the World Bank set the tone for the event with an expert summary of the changes in rankings around the world. According to his survey, the number of countries using some kind of university rankings nearly doubled between 2006 and 2007 – and it is rankings by government agencies, not media outlets, that are leading this charge. It seems now that any time a collection of indicators appears, people have an irresistible urge to aggregate and weight these indicators and turn them into rankings!
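To make the mechanics concrete, here is a minimal sketch in Python of that aggregate-and-weight step. All institution names, indicator values and weights below are hypothetical; they are not taken from any actual ranking system.

```python
# Minimal sketch of how indicators get aggregated and weighted into a
# ranking. All names, values and weights are hypothetical.

# Indicator values, assumed already normalized to [0, 1].
indicators = {
    "Univ A": {"funding": 0.9, "publications": 0.8, "reputation": 0.95},
    "Univ B": {"funding": 0.6, "publications": 0.9, "reputation": 0.70},
    "Univ C": {"funding": 0.4, "publications": 0.5, "reputation": 0.40},
}

# The choice of weights is where the normative judgement enters.
weights = {"funding": 0.3, "publications": 0.4, "reputation": 0.3}

def composite_score(values):
    """Weighted sum of normalized indicator values."""
    return sum(weights[name] * value for name, value in values.items())

# Sort institutions by composite score, highest first.
ranking = sorted(indicators,
                 key=lambda inst: composite_score(indicators[inst]),
                 reverse=True)

for rank, inst in enumerate(ranking, start=1):
    print(f"{rank}. {inst}: {composite_score(indicators[inst]):.2f}")
```

Everything contentious lives in the weights line: change the weights and the order of the list changes with them, which is precisely why a ranking encodes a normative judgement rather than a neutral measurement.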

Despite the spread of rankings around the world, they remain deeply controversial. They have been blamed for a host of ills, including the increasing “marketization” of universities, the increasing homogenization of institutional missions and an array of irrational public policy and institutional policy decisions. The Shanghai Jiao Tong Rankings of World-Class Universities in particular, which are largely research rankings, have been vilified for being insufficiently sensitive to various national and disciplinary traditions and overly positive about the American research university model.

Certainly, there is some truth to many of these charges.  But I’m going to let everyone in on a nasty little secret.  Although universities and academics publicly deplore rankings, the real problem is not that these rankings misrepresent the truth about which universities are best, but that they reveal too much about what the academic community really values.

Most rankings are fundamentally measuring – either directly or indirectly – institutional funding (Maclean’s is notably heavy on these kinds of measures) and research output (e.g. Shanghai Jiao Tong and other Asian ranking systems). Many also measure something called “prestige” or “reputation” (the Times Higher Education Supplement world rankings are particularly heavy on this measure, but it is US News and World Report that gets a lot of the press on this one). Some people say this is highly unfair because it is “too subjective”. But as recent research by Ross Williams of the Melbourne Institute and Gero Federkeil of the German Centre for Higher Education has shown, “reputation among academics”, properly measured, is actually an excellent proxy for research output because of the high degree of correlation between reputation and bibliometric measures of scientific output. So in fact, when rankings measure prestige, they are indirectly measuring research output.
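For the curious, a minimal sketch of what such a correlation check looks like; the figures are invented purely for illustration and are not the Williams/Federkeil data.

```python
# Hypothetical illustration of checking the reputation/output link:
# peer-survey reputation scores versus a bibliometric measure for a
# handful of institutions. The numbers are invented for demonstration.
from statistics import correlation  # available since Python 3.10

reputation_scores = [92.0, 78.0, 65.0, 51.0, 43.0]    # peer-survey scores
citations_per_staff = [41.0, 33.0, 25.0, 20.0, 14.0]  # bibliometric proxy

# A Pearson r close to 1 is what makes reputation a usable stand-in
# for research output.
r = correlation(reputation_scores, citations_per_staff)
print(f"Pearson r = {r:.2f}")
```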

When one thinks about it, this makes a lot of sense. “Reputation” is the coin of academia. But reputation – a truly national or global reputation – can only be earned through scholarly communications, which are universally available and non-rival in nature (i.e. an unlimited number of people can consume them). There are no academics who are globally known for their prowess in teaching, because teaching is always limited (there are only so many seats in a classroom) and local. And what goes for individual professors also goes for institutions, which after all are simply a bundle of individual professors.

Moreover, network effects exist in academia. Bluntly, elite academics as a bunch are inherently snobby and prefer to work with other elite academics. Thus, elite academics are not distributed randomly throughout academia but are instead highly concentrated. And – surprise, surprise – money and talent are inextricably linked. In many countries, research granting systems are specifically designed to make this link, and of course the presence of “superstar researchers” makes fundraising much easier. So aggregations of good people attract lots of money (there are of course other issues at play here – notably the age of an institution – but this relationship basically holds true).

Given this, it’s easy to see how rankings that measure money and research actually do a pretty good job of capturing the prejudices and norms of academia. Maybe not in the middle ranges, where there isn’t a great deal to distinguish one university from another, but certainly at the very top they would have to be extraordinarily badly designed not to capture these norms. That’s why in American and world rankings Harvard is always at or near the top, and why at Maclean’s it’s always either the University of Toronto or McGill that occupies the top spot. Obviously it is possible to devise criteria which don’t put these institutions first; the Globe and Mail puts the University of Western Ontario first because it uses student satisfaction as a prime indicator. While this is a commendable showing, absolutely nobody – on the basis of this ranking alone – is going to mistake Western for a world-class university, because deep down no one actually believes student satisfaction is an appropriate indicator of academic greatness.

Therefore, a lot of the complaints one typically hears about university rankings are – in my view – fundamentally misplaced.  Whether directly or indirectly, rankings are pretty good at capturing the norms of the academic profession.  If you have a problem with rankings, you actually probably have a problem with these norms.

Don’t think it’s right that rankings privilege research-intensive institutions over teaching institutions? Well, maybe so, but academia values research over teaching. Perplexed that rankings do not give sufficient credit to the extra work that poorer universities do to achieve the kinds of things their wealthier colleagues take for granted? Too bad, because academia cares more about raw outputs than about value-added. Outraged that English-language universities are privileged over others in research rankings because the top journals are all in English? Well, academia abhors barriers to knowledge dissemination, and nothing impedes understanding like the lack of a common language.

There is an intelligent case to be made, of course, for the development of indicators and rankings that do not privilege research, money and the English language. Among other things, such indicators would probably be better for some of the main consumers of indicator data, such as students (who need data on undergraduate teaching to make good decisions about choice of institution) and governments (who would be quite interested in value-added data). That in turn would help people understand which institutions have particular strengths and which are doing an excellent job of fulfilling their mandates.

But no one should fool themselves into thinking that success by either of these yardsticks makes for a world-class university. Money and research still trump everything where “world-class-ness” is concerned. Still don’t think it’s fair that only big, rich institutions get to the top? That’s your right, of course. Just don’t blame the rankers for saying so. All they’re doing is holding a mirror to the norms of the academic profession.

More on this theme can be found on the special page devoted to rankings and study choice information, under Educatie on ScienceGuide.

