What is the impact of rankings?

News | the editors
13 November 2007 | Professor Ellen Hazelkorn (director and dean, Dublin Institute of Technology) is the author of an OECD study on the meaning and effects of rankings. "The pace of higher education reform is likely to quicken in the belief that more elite, competitive and better institutions are equivalent to being higher ranked. But what are the costs?" You can read her reflection on this below.

Few people in higher education today are unaware of university rankings. Criticised and lampooned by many, their increasing popularity and notoriety are a reflection of the absence of publicly available ‘consumer’ information for students, parents and other stakeholders about higher education institutions. But rankings are also a response to the growing global competition for ‘good’ students, academic staff, finance, PhD students and researchers, and to calls for greater accountability and quality assurance. Undoubtedly, part of their credibility derives from their simplicity and the fact that they are independent of the higher education sector or individual universities.

Despite the existence of 17,000 higher education institutions worldwide, there is now a near-obsession with the status and trajectory of the top 100. Over recent decades, rankings or ‘league tables’ have become a feature of many countries. They are usually published by government and accreditation agencies, higher education, research and commercial organisations, or the popular media. As higher education has become globalised and internationalised, worldwide rankings have appeared, such as the Shanghai Jiao Tong list and that produced by the THES. The former has effectively become the brand leader, regularly referenced by university leaders and government ministers.

Rankings differ from classification systems, such as the US Carnegie Classification System, which provide a typology of institutions according to mission and type. Rankings use weighted indicators or metrics to measure higher education activity. The aggregate scores are expressed as a single number, and institutions are then ranked, with the ‘best’ performer given the lowest number. Data is primarily drawn from statistics, publicly available information such as teaching quality or research assessments, or questionnaires and feedback from students, peers or selected opinion-formers.
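
To make the mechanics concrete, the sketch below shows how a generic weighted-indicator ranking of this kind can be computed. The indicator names, weights and scores are hypothetical illustrations only and are not taken from the methodology of any actual league table.

```python
# Illustrative sketch only: a generic weighted-indicator ranking.
# The indicators, weights and scores are hypothetical and do not reproduce
# any real ranking's methodology.

# Hypothetical normalised indicator scores (0-100) per institution.
scores = {
    "University A": {"research": 92, "teaching": 70, "reputation": 85},
    "University B": {"research": 75, "teaching": 88, "reputation": 80},
    "University C": {"research": 60, "teaching": 95, "reputation": 65},
}

# Hypothetical weights attached to each indicator (summing to 1).
weights = {"research": 0.5, "teaching": 0.3, "reputation": 0.2}

# Aggregate each institution's indicators into a single weighted score.
totals = {
    name: sum(weights[ind] * value for ind, value in inds.items())
    for name, inds in scores.items()
}

# Rank institutions: the 'best' performer receives the lowest rank number (1).
for rank, (name, total) in enumerate(
        sorted(totals.items(), key=lambda item: item[1], reverse=True),
        start=1):
    print(f"{rank}. {name}: {total:.1f}")
```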

Initially, criticism focused on technical and methodological aspects – choice of indicators, the weighting attributed to them, and their use as quality ‘proxies’. They are often accused of bias towards science, biomedical and technology disciplines, English-language publications, and traditional research outputs and formats. Due to the paucity of comparable data for teaching and learning and service/third-mission activities, worldwide rankings, in particular, are over-reliant on research data and peer review – giving rise to their particular focus. Recent criticism has highlighted their impact and influence. Can the ‘report card’ approach measure the full range of institutional activity or does it impose a ‘one-size-fits-all’ definition or ‘norm’ on higher education? Are rankings encouraging institutions to become what is measured? Because newer universities consistently rank lower than ‘older’ more well-established and well-endowed universities, are ‘elite’ institutions caught in a virtuous cycle of cumulative advantage while other institutions become relatively poorer? What is the likely impact on other policy objectives, such as ‘third mission’ activities, widening access and increasing institutional diversity?

To better understand rankings, an international study was conducted last year in association with the OECD Programme on Institutional Management of Higher Education and the International Association of Universities. It asked how institutions were responding to rankings, and what impact or influence rankings were having on decision-making. Leaders from 202 higher education institutions in 41 countries participated, spread relatively evenly across well-established and new institutions, and across teaching-intensive, research-informed and research-intensive institutions.

University leaders said they believe rankings help maintain and build institutional position and reputation, that good students use rankings to ‘shortlist’ their university choices, especially at postgraduate level, and that key stakeholders use rankings to inform their own decisions about accreditation, funding, sponsorship and employee recruitment. These and other benefits are seen to flow directly from a high ranking, while the reverse is also believed to be true. Over half the leaders said they were unhappy with their current position: 70% want to be in the top 10% nationally, and 71% want to be in the top 25% internationally. The majority have a formal process to review the results, usually led by the president/rector, and are taking strategic, organisational, managerial and/or academic actions. These include embedding rankings in strategic decision-making processes and ‘target agreements’ with faculties, establishing a ‘new section to monitor rankings’, ‘developing better management tools’, and providing ‘more scholarships and staff appointments’. In general, universities are ensuring that senior staff are well briefed on the significance of improving performance. Some mentioned mergers with other institutions to boost position, or shifting resources from teaching to research.

Rankings also influence national and international partnerships and collaborations. Leaders say they consider a potential partner’s rank before entering into discussions about research and academic programmes. In addition, rankings influence the willingness of others to partner with them or to support their membership of academic/professional associations. This international experience is replicated in the growing international literature and journalistic commentary. Because they fulfil particular needs, rankings have gained popularity. Accordingly, initial ‘concerns’ were often set aside with reference to an institution’s (poor) score or to broader objectives, such as the need for benchmarking or strategic planning. Today, there is wider acceptance that rankings are influencing higher education decision-making, stakeholder behaviour, and government policy-making.

Rankings are both a manifestation and an engine of the competitive global higher education market. Because of the close correlation between rankings and reputation, governments are using rankings as a policy instrument while institutions are using rankings as a management tool. Institutions at all levels in the selectivity game are devoting resources to activities related to improving institutional position, including recruiting students who will be assets in terms of maintaining and enhancing rank. Excellence initiatives in Germany, Russia, China and France are policy responses to rankings. The pace of higher education reform is likely to quicken in the belief that more elite, competitive and better institutions are equivalent to being higher ranked.

But what are the costs? Rankings inflate the academic arms race, locking institutions and governments into a continual quest for ever-increasing resources. Alex Usher, in a 2006 article titled “Can our schools become world class?” in The Globe and Mail, wrote that a significant world-class university is a $1b-a-year operation which needs to increase its overall funding by at least 40%. Very few societies or public institutions can afford this level of investment. Rankings are propelling a growing gap between elite and mass higher education, with greater institutional stratification and research concentration. Institutions which do not meet the criteria or do not have ‘brand recognition’ will effectively be under- or de-valued.

Despite protest and criticism, there is now a strong realisation that some form of national and international comparison is useful, inevitable and here to stay. Rankings are equated in the minds of students, their parents and other key stakeholders with excellence, and are now a significant factor shaping institutional reputation. Higher education needs to learn to live with them. There are also significant policy implications, and a role for educating public opinion.

Various international initiatives, including by the OECD, the EU, and the International Rankings Expert Group, which published the Berlin Principles in 2006, are responding to these challenges. The OECD, for example, is examining how the full range of activities in which diverse institutions engage, notably teaching and learning, should be measured. The key questions are: how should the quality and performance of institutions be defined and measured, by whom, and for what purpose?

The next phase of the OECD investigation, under the auspices of the Institute of Higher Education Policy (www.ihep.org), with funding from the Lumina Foundation, will involve case studies of institutions in Germany, Australia and Japan, where senior leaders, academic staff and students will be interviewed. Ultimately, it is critical to understand how an arguably innocuous consumer concept has been transformed into a policy instrument, with wide-ranging intentional and unintentional consequences for higher education and society.

[From: Higher Education Management and Policy, a publication of IMHE at the OECD]


