European higher education ranking system on the way

News | the editors
2 June 2006 | In Berlin, an international group of scientists and publishers has formulated guidelines for ranking higher education institutions: the “Berlin Principles on Ranking of Higher Education Institutions”. Robert Coelen, director of the International Office of Leiden University, attended the conference and stresses the importance of European cooperation in this field: “You can now see that Studiekeuze 123 wants to be an expanded copy of the German CHE system. Where we should not end up, however, is with every country in Europe having its own rating system.

Especially not here, because in the Netherlands we have universities that want to raise their profile in Europe, and so we have little to gain from standing alone in Europe. I also have a problem with the fact that Studiekeuze123 contains quite a lot of opinion, because how can you verify that at all? The Dutch student is embedded in a collective memory built up over decades, and the reputation of universities in that collective memory does not change quickly. At the same time, that picture, and therefore Studiekeuze123 as well, says little about how Dutch universities compare with institutions in other countries.” Leiden University is organising a follow-up conference on this theme on 21 June. The Flemish minister will also appear there, indicating that he and the universities in Flanders want to join the CHE system together with their Dutch colleagues. (An earlier ScienceGuide interview with CHE director Prof Detlev Müller-Böling can be read here.)

Prof Marijk van der Wende (VU and UT/CHEPS) analyses the significance of this new development and the momentum behind the Leiden conference as follows: “These principles for good practice come at exactly the right time; our Dutch ranking initiative can tie in with them nicely. The leader of this expert group, Dr Jan Sadlac of UNESCO-CEPES, also spoke at the previous ranking seminar in Leiden on 16 February, where he already stressed the importance of a sound methodological approach. The principles have now been formally established at the international level. They once again underline the strength and quality of the German CHE ranking system. CHE was closely involved in drawing up these principles, as its logo above them shows. This confirms that our decision to seek alignment with this system was the right one.

At the next seminar, jointly organised by Leiden, VSNU, CWTS and CHEPS, we will discuss how Studiekeuze123 can be developed towards the CHE system. With Switzerland and Austria having already joined the German initiative, this makes the European expansion of the CHE system to our country a fact. The Flemish minister of education will also be present in Leiden to indicate that Flemish higher education will adopt this approach as well. As with accreditation, we can therefore work together with our southern neighbours on this dossier too. The European Commission has indicated that it views the creation of a European ranking system in this way very positively. A project subsidy is currently under consideration.”

Read the Berlin Principles below.




Berlin Principles on Ranking of Higher Education Institutions

Rankings and league tables of higher education institutions (HEIs) and programs are a global phenomenon. They serve many purposes: they respond to demands from consumers for easily interpretable information on the standing of higher education institutions; they stimulate competition among them; they provide some of the rationale for allocation of funds; and they help differentiate among different types of institutions and different programs and disciplines. In addition, when correctly understood and interpreted, they contribute to the definition of “quality” of higher education institutions within a particular country, complementing the rigorous work conducted in the context of quality assessment and review performed by public and independent accrediting agencies. This is why rankings of HEIs have become part of the framework of national accountability and quality assurance processes, and why more nations are likely to see the development of rankings in the future. Given this trend, it is important that those producing rankings and league tables hold themselves accountable for quality in their own data collection, methodology, and dissemination.

In view of the above, the International Ranking Expert Group (IREG) was founded in 2004 by the UNESCO European Centre for Higher Education (UNESCO-CEPES) in Bucharest and the Institute for Higher Education Policy in Washington, DC. It is upon this initiative that IREG’s second meeting (Berlin, 18 to 20 May 2006) has been convened to consider a set of principles of quality and good practice in HEI rankings: the Berlin Principles on Ranking of Higher Education Institutions.

It is expected that this initiative has set a framework for the elaboration and dissemination of rankings—whether they are national, regional, or global in scope—that ultimately will lead to a system of continuous improvement and refinement of the methodologies used to conduct these rankings. Given the heterogeneity of methodologies of rankings, these principles for good ranking practice will be useful for the improvement and evaluation of ranking.

Rankings and league tables should:

A) Purposes and Goals of Rankings

1. Be one of a number of diverse approaches to the assessment of higher education inputs, processes, and outputs. Rankings can provide comparative information and improved understanding of higher education, but should not be the main method for assessing what higher education is and does. Rankings provide a market-based perspective that can complement the work of government, accrediting authorities, and independent review agencies.

2. Be clear about their purpose and their target groups. Rankings have to be designed with due regard to their purpose. Indicators designed to meet a particular objective or to inform one target group may not be adequate for different purposes or target groups.

3. Recognize the diversity of institutions and take the different missions and goals of institutions into account. Quality measures for research-oriented institutions, for example, are quite different from those that are appropriate for institutions that provide broad access to underserved communities. Institutions that are being ranked and the experts that inform the ranking process should be consulted often.

4. Provide clarity about the range of information sources for rankings and the messages each source generates. The relevance of ranking results depends on the audiences receiving the information and the sources of that information (such as databases, students, professors, employers). Good practice would be to combine the different perspectives provided by those sources in order to get a more complete view of each higher education institution included in the ranking.

5. Specify the linguistic, cultural, economic, and historical contexts of the educational systems being ranked. International rankings in particular should be aware of possible biases and be precise about their objective. Not all nations or systems share the same values and beliefs about what constitutes “quality” in tertiary institutions, and ranking systems should not be devised to force such comparisons.




B) Design and Weighting of Indicators

6. Be transparent regarding the methodology used for creating the rankings. The choice of methods used to prepare rankings should be clear and unambiguous. This transparency should include the calculation of indicators as well as the origin of data.

7. Choose indicators according to their relevance and validity. The choice of data should be grounded in recognition of the ability of each measure to represent quality and academic and institutional strengths, and not availability of data. Be clear about why measures were included and what they are meant to represent.

8. Measure outcomes in preference to inputs whenever possible. Data on inputs are relevant as they reflect the general condition of a given establishment and are more frequently available. Measures of outcomes provide a more accurate assessment of the standing and/or quality of a given institution or program, and compilers of rankings should ensure that an appropriate balance is achieved.

9. Make the weights assigned to different indicators (if used) prominent and limit changes to them. Changes in weights make it difficult for consumers to discern whether an institution’s or program’s status changed in the rankings due to an inherent difference or due to a methodological change.

C) Collection and Processing of Data

10. Pay due attention to ethical standards and the good practice recommendations articulated in these Principles. In order to assure the credibility of each ranking, those responsible for collecting and using data and undertaking on-site visits should be as objective and impartial as possible.

11. Use audited and verifiable data whenever possible. Such data have several advantages, including the fact that they have been accepted by institutions and that they are comparable and compatible across institutions.

12. Include data that are collected with proper procedures for scientific data collection. Data collected from an unrepresentative or skewed subset of students, faculty, or other parties may not accurately represent an institution or program and should be excluded.

13. Apply measures of quality assurance to ranking processes themselves. These processes should take note of the expertise that is being applied to evaluate institutions and use this knowledge to evaluate the ranking itself. Rankings should be learning systems continuously utilizing this expertise to develop methodology.

14. Apply organizational measures that enhance the credibility of rankings. These measures could include advisory or even supervisory bodies, preferably with some international participation.

D) Presentation of Ranking Results

15. Provide consumers with a clear understanding of all of the factors used to develop a ranking, and offer them a choice in how rankings are displayed. This way, the users of rankings would have a better understanding of the indicators that are used to rank institutions or programs. In addition, they should have some opportunity to make their own decisions about how these indicators should be weighted.

16. Be compiled in a way that eliminates or reduces errors in original data, and be organized and published in a way that errors and faults can be corrected. Institutions and the public should be informed about errors that have occurred.



Berlin, 20 May 2006







