Under the heading "not everything that counts can be counted", the League of European Research Universities has opened up the discussion on the pros and cons of evaluating the fruits of university research. Under the auspices of Dr. Mary Philips (former director of research planning at University College London), LERU is currently working on a paper that will certainly stir up a healthy debate.
Growth in research evaluation
Governments have by and large understood the need to invest in research because it is a vital engine driving innovative, knowledge-driven economies. It goes without saying that they - and funders of research in general - want to evaluate the fruits of their investment. But the growth in research evaluation regimes risks creating (has already created, one may argue) a sometimes unhealthy obsession with monitoring and measuring research.
This potentially has some undesirable consequences: demands on universities to produce excessive amounts of data, straining finite human and financial resources; unhelpful or conflicting duplication across assessment exercises; a short-sighted "bean counting" culture; and other practices that detract from rather than support high-quality research.
Return on investment
Of course, universities themselves assess the research performed within their walls for a variety of reasons. Along with governments and research funders, they want to gauge research output, quality and impact, improve performance and maximise return on investment.
Universities are also interested in research assessment as a way to inform the strategic planning and positioning of the university, to invest in areas of research strength or in new directions, to expose weaknesses, to identify and track individual accomplishments, to recruit, retain or reward top performers, to find and foster productive research collaborations, etc.
Evidence of usefulness
Even when it is clear who wants to assess research for what purpose, further challenges stem from the fact that assessment can be performed in different ways. Peer review (basically asking other researchers to evaluate the research) is a widespread method, but it is costly, time-consuming and open to subjectivity or bias.
Another method is to collect bibliometric data, including the number of publications, citation frequency, etc. While it is less costly, the bibliometric approach also has drawbacks. In addition, a newer trend is to demonstrate research impact, i.e. evidence that a piece of research is in a broad sense useful to society. While impact has its place, it should be understood that it is not the driving force of research. Whatever the method, there is clearly a need for sophisticated tools, but it is equally necessary to understand their limits.
What can be done in practice? Universities, for example, need ample human expertise and sophisticated research assessment tools suited to the task of assessing their research strengths and weaknesses. They should maintain central databases capable of producing fine-grained, accurate and up-to-date HR and research data. They can also support the emerging practice of using unique personal identifiers to avoid ambiguity over researchers' names.
External agencies should avoid creating perverse incentives for universities and researchers and should ensure consistency, so that comparisons are reliable both locally and internationally. Above all, in assessing university research they need to appreciate that research often has a long-term outlook rather than a concern with immediate return on investment.
Sense and sensibility
Our main point is to call for a sensible approach to research assessment. Governments, universities and others should "assess assessment", carefully looking at what works in different research environments and building on good practice where there is valid evidence that the process leads to demonstrable improvements in productivity and impact.
The paper will be presented during a breakfast launch event in Brussels on 19 June 2012. More information at www.leru.org.