Wanted: Solid and sophisticated information system
Comparable information on higher education institutions and study programmes is of great importance for students and academics. Existing information systems are often national in scope and therefore of little use for mobility purposes. However, the development of a common European Higher Education Area has led to an increase in student and staff mobility, which in turn has increased the need for comparable international data. Consequently, a pilot project started in 2006 to extend the CHE Ranking to HEIs in the Netherlands and Flanders. The underlying aim was to develop improved methodologies for international rankings. On October 11th, CHEPS, CHE, CWTS and the Ministry of Education and Training of the Flemish Community of Belgium organised a conference on the results of the pilot. In this light, it seems important to elucidate the students' view on the most important elements of the conference and on European rankings in general.
Before assessing the criteria for a solid international ranking, it is necessary to clear the air by drawing a distinction between traditional rankings and rankings like the Dutch Studiekeuze123 and the German CHE. Traditional rankings like Shanghai Jiao Tong and the Times Higher Education Supplement do not provide a valid evaluation of the quality of institutions or study programmes. They are often research-oriented, do not differentiate between subject areas and do not provide information on teaching quality. Additionally, this type of ranking does not cover the European Higher Education Area and often favours English-speaking nations. As a result, traditional rankings suffer from methodological weaknesses too severe for them to function as study choice tools, and they are therefore of less interest to students. SK123 and CHE, by contrast, do differentiate between subject areas, provide information on facilities and services, and leave the relevance of the (quality) indicators to the user. This means that the aim these rankings wish to accomplish is different: traditional rankings produce lists of higher education institutions, while SK123 and CHE aim to function as information systems.
As a result, it seems logical that tensions arise between traditional rankings and information systems. The two approaches vary extensively and have totally different objectives, yet both operate under the same umbrella term "ranking". Rankings in higher education are often controversial and heavily contested, and this negative association with the term consequently has a negative impact on the concept of student information systems, even though the two pursue different objectives and attract different audiences. Since these information systems function as one of the instruments that should eventually lead to a match between the student and the study programme, it was argued during the conference that it is vital to rename "ranking", in the case of student information systems, to "matching". However, I think this term is not ideal either, because the actual match occurs at a much later stage. In other words, these information systems merely serve as a basis or orientation for matching.
In addition, it is important to highlight another issue. Comparisons of quality, even when incorporated in student information systems, frequently receive the criticism of presenting some type of ranking. So the question arises whether it is possible to organise comparisons without presenting hierarchies in the results. The answer is no. Even if ranking is not the intended aim, any form of classification of quality leads to some kind of hierarchical interpretation. It would be an illusion to think that eliminating such interpretations is a realistic prospect. However, this should not be a reason for hesitation.
After setting these concerns straight, the following question needs consideration: what lessons can be learned from the pilot project? To answer it, some reflection is needed on the response rates and other results of the pilot. First of all, response rates were low. The idea was to incorporate 12 Dutch and 12 Flemish study programmes in the project. In the end, only 7 Dutch and 10 Flemish programmes cooperated, and only a marginal 7 percent of Dutch students and 10 percent of Flemish students participated. A possible reason for the lower percentage in the Netherlands is the competition with the national information system SK123; in Flanders there is no such system.
As already mentioned, the pilot project was directed towards developing accurate and applicable methodologies for linking SK123 and CHE, a first step towards an international information system. It appeared that a full-scale link between SK123 and CHE encountered some difficulties: several indicators present in CHE were missing in SK123, and vice versa. Missing in SK123 were double degrees, lecturers who speak a foreign language, research performance, information on foreign staff, and the gender balance of the staff. In CHE, information on full- and/or part-time studies was missing, and future income, feasibility of the study and required previous education were not an integral part of the CHE ranking either.
An important problematic element in the pilot is student satisfaction. The system makes use of the Green-Yellow-Red principle: green stands for above average, yellow for average and red for below average. In the German CHE, this principle gave the following picture: 26 percent green, about 50 percent yellow and 24 percent red. Although exact numbers were not provided during the conference, it was said that in the Dutch SK123 the green percentage is much lower, yellow is about the same, and red is much higher than 24 percent. On the basis of these results one could argue that German programmes are better than Dutch programmes. But it is essential to ask whether this is true, and if not, what these numbers really tell us. First, Don Westerheijden argued that people should keep in mind that the grading scales in Germany differ extensively from those in the Netherlands. Germany uses a scale of 1 to 6, in which 1 represents the highest score and 6 the lowest; the Netherlands uses a scale of 1 to 10, in which 10 represents the best. As a result, a 5 or a 6 in Germany is seen as a failure, while in the Netherlands grades from 1 to 5 are seen as insufficient, which might influence the manner in which students grade. In addition, it was argued at the conference that expectation levels differ among nationalities: Dutch students appeared to be more demanding regarding their study programmes. Hence, it is impossible to state that Dutch programmes are of poorer quality than German ones. The results are merely distorted because the methodologies are not robust enough yet, and further research on designing these kinds of tools is necessary.
An important part of the pilot, and therefore also of the conference, was to find out what student information systems should measure. What are the required classifications and methodologies? Foremost, it was argued that an important prerequisite is that the typology must serve the needs of the different stakeholders: students, academics, businesses and higher education bodies. It must enhance system transparency and show comparable data. In order to do so, it must be clear which classification system is used; in other words, the classification depends on the type of study programme. Frans van Vught suggested classifying Bachelor, Master and Research programmes differently. Additionally, I think it is important to classify the institutions too. It makes absolutely no sense to compare, for example, a traditional research university with highly selective admission policies like Oxford University to a university of applied sciences with a strong focus on widening access to higher education and on city development like Hogeschool Rotterdam. They belong to totally different types of institutions. What we need is a multidimensional approach in which the nature of the institution as well as the nature of the study programme is made public. Only then is a stable and strong foundation laid for designing a ranking system that makes sense.
Although the next subject was not part of the debate during the conference on October 11th, I think it is necessary to elaborate on the position of the hogescholen as an integral part of an international information system. At this point, the data on hogescholen are not as developed and complete as the data on universities, especially not for master programmes. An explanation can be found in the binary system of Dutch higher education, in which it is often difficult, in an international arena, to distinguish between universities and hogescholen. It has already proven difficult to differentiate between the study programmes offered by universities; the picture becomes even more complicated and blurred when the differences between programmes offered by universities and those offered by hogescholen have to become visible internationally. Nevertheless, this obstacle is not only present in the Netherlands but also in unitary systems. In the English context, for example, the same dilemma occurs when a distinction has to be made between Bath Spa University (comparable to a hogeschool) and Bath University (a highly research-oriented university). Accordingly, more attention is needed to tackle this dilemma.
For students it is important that factual differences in the content of study programmes become visible. In which aspects does a Bachelor in Economics in Maastricht differ from the same bachelor in Tilburg or Paris? Precisely this type of information is vital for finding out why programmes differ and for choosing the programme that suits the student best. Moreover, it is important that subjective student judgements form an integral part of a student information system, though these should always be combined with factual information. The following example makes this perfectly clear: as a student I am interested in the factual student-computer ratio, not in the opinion of another student who argues that the number of computers is or is not enough. The latter obviously tells me less than the former. Therefore, it is advisable to incorporate as much factual data as possible in information systems.
For an international classification instrument, adaptation is of key importance. This means that more than a translation of the information system's website is required: the definitions of the indicators need different answering categories and wording depending on the target groups. An international information system should be applicable to all users, independent of their country of origin.
In conclusion, it is an illusion to argue that the ideal student information system already exists. At the moment, the methodological weaknesses still outweigh the strengths. And although absolute indicators for measuring the quality of higher education institutions do not exist, it is essential to think about how to organise these information systems. Although it has proven to be a difficult job, it is important that we do not resist them. We definitely need international information tools in the future, especially considering that mobility rates will increase. Now is the time to cooperate, to give input and to guide the process. Together we have to define what student information systems should and should not be, and organise an in-depth discussion on how the methodologies might be improved. Only then is it possible to create an accurate classification system for higher education, which can serve as the basis for matching. Otherwise somebody else will do it for us, and to be classified by others is something we should definitely prevent from happening.
Board member ISO