Europe must dare to rank

News | editorial staff
27 April 2009 | Europe must ensure high-quality rankings. Looking ahead to 'after Leuven', Commissioner Potocnik sticks his neck out with a plea for good, multidimensional rankings that should render both the Times HE and the Shanghai Jiao Tong rankings obsolete. "If higher education is an engine of the economy, then the productivity, quality and status of research produced by universities are a vital indicator. I think it is time that rankings assume a more inclusive notion of research, covering all kinds of research, ranging from basic, curiosity-led research to more commercial and practice-based research."

A topic that is increasingly being discussed in European circles is how to evaluate and compare universities' performance. We have recently asked a group of experts to advise us on this subject. Professor Wolfgang Mackiewicz of the Free University of Berlin, the chair of this expert group, presented the group's interim report at a workshop I had the pleasure of attending today.

Despite its very complicated-sounding name – the Multidimensional Assessment of University-based Research – the group's report deals with something commonplace: the use of rankings and how best to measure the excellence of university research. It was fascinating to see how sophisticated ranking methodologies are becoming, and also to find out that rankings really are a little science in themselves! I was pleased to see the variety of guests who participated in the workshop. It convinced me how popular and useful rankings are as a tool for many different users. At the same time, it made me understand the crucial importance of making these tools as precise and accurate as possible.

Over the last couple of months, the Expert Group has dealt with precisely this question: how to create a new and more coherent ranking methodology. The idea for such a project on rankings stems from a series of recommendations by the Commission following our 2006 Communication on delivering on the modernisation agenda for universities. This communication suggests that universities should become more specialised and should differentiate themselves according to their own strengths. Diversifying will increase the excellence of European higher education institutions and will also make them more competitive on a European and global scale – but it creates a problem when we try to measure and compare excellence among different institutions. The issue becomes even more difficult because, apart from becoming more diverse, universities are also becoming more multifunctional, taking on roles and responsibilities beyond their traditional teaching and research roles, embarking on innovation projects, management, community engagement and more.

All this is good, but for the same reason some might begin to wonder whether rankings truly are important enough to be worth all this trouble. One might ask whether it is at all possible to measure the excellence of such diverse universities and their research activities. How, if at all, can one objectively determine which university is the best and which research is of the highest quality?

Even if the answer here is unclear, the participants at today's workshop agree that there is a definite need to revise and improve the existing methodologies. Today's ranking schemes, such as the Shanghai [Jiao Tong] Academic Ranking of World Universities or the Times [QS] World University Rankings, contain many biases, and some people have even claimed they cause more harm than good and have proposed moratoriums on their use! I agree that we should be careful with the use and interpretation of rankings, but I also think we shouldn't throw out the baby with the bathwater and do away with rankings altogether! The reality is that rankings are useful, and focusing on their shortcomings alone isn't enough.

As I said, I fully agree with my colleagues in the Expert Group that existing ranking methodologies do not accurately represent the diverse and multifunctional nature of universities and their research activities. Nor do they take into consideration the different categories of users who turn to rankings for their own specific purposes: students use them to shortlist their choice of universities; academics to support their professional reputation and status; public and private funders to guide their decisions about funding allocations; and even politicians seem to refer to them often as a measure of national economic achievement or aspiration. For all these reasons, the Expert Group was mandated to prepare a more comprehensive, multidimensional assessment approach to rankings. This approach identifies the various users of rankings and makes use of a sort of "tool kit" to map out their needs. The multidimensional approach also considers the variety of university disciplines and research paths, and thereby tries to overcome the shortcomings of the existing methodologies.
I like to say that if higher education is an engine of the economy, then the productivity, quality and status of research produced by universities are a vital indicator. I think it is time that rankings assume a more inclusive notion of research, covering all kinds of research, ranging from basic, curiosity-led research to more commercial and practice-based research.

Making rankings more coherent may also have a stimulating effect on the funding of university research. One may expect rankings to have an impact on how universities are funded in the future, so we had better make them better and more sophisticated than they are today. A recent study on the impact of external project-based research funding on the financial management of universities, conducted for the Commission, found that universities increasingly have to diversify their funding streams and sources in order to fully support their research activities. Rankings, which have been shown to influence the behaviour of universities and to promote strategic thinking and planning, could help universities develop better management practices and thus attract potential external funders. This, in the end, is what we are trying to do: secure the future of research activities in universities, which will benefit our societies for a long time to come.

For me, the message from the workshop is clear. Today there is no single, correct methodology. It is our task to help develop methodologies which are fit for purpose and suitable for a range of disciplines as well as for interdisciplinary research. However, we should be careful. The intention of evaluation is certainly not to create an 'audit society' in which indicators become ends in themselves. Otherwise, we run the risk of losing sight of the essential role which universities and research play in wider society and culture. We should encourage more discussion to bridge the sometimes artificial gap between education and research on the one hand, and management, funding and policy on the other. This is obviously part of a much wider and no less important debate.

[contribution by Potocnik on his new blog, which you can find at]
