Beyond the bean counting

News | the editors
7 June 2012 | Accountability is the latest fashion in assessing government policies. But at what point does healthy evaluation turn into an unhealthy obsession with monitoring and measuring?

Under the heading “not everything that counts can be counted”, the League of European Research Universities has opened up the discussion on the pros and cons of evaluating the fruits of university research. Under the auspices of Dr. Mary Philips (former director of research planning at University College London), LERU is currently working on a paper that should stir up a healthy debate.

Growth in research evaluation

Governments have by and large understood the need to invest in research because it is a vital engine driving innovative, knowledge-driven economies. It goes without saying that they – and funders of research in general – want to evaluate the fruits of their investment. But the growth in research evaluation regimes risks creating (and, one may argue, has already created) a sometimes unhealthy obsession with monitoring and measuring research.

This potentially has some undesirable consequences: demands on universities to produce excessive amounts of data, straining finite human and financial resources; unhelpful or conflicting duplication of various assessment exercises; a short-sighted “bean counting” culture; and other practices that detract from rather than support high-quality research.

Return on investment

Of course universities themselves assess the research performed within their walls for a variety of reasons. Along with governments and research funders, they want to gauge research output, quality and impact, improve performance and maximise return on investment.

Universities are also interested in research assessment as a way to inform the strategic planning and positioning of the university, to invest in areas of research strength or in new directions, to expose weaknesses, to identify and track individual accomplishments, to recruit, retain or reward top performers, to find and foster productive research collaborations, and so on.

Evidence of usefulness

Even when it is clear who wants to assess research for what purpose, further challenges stem from the fact that assessment can be performed in different ways. Peer review (essentially asking other researchers to evaluate the research) is a widespread method, but it is costly, time-consuming and open to subjectivity or bias.

Another method is to collect bibliometric data, including the number of publications, citation frequency and so on. While it is less costly, the bibliometric approach also has its drawbacks. Moreover, a new trend is to show research impact, i.e. evidence that a piece of research is, in a broad sense, useful to society. While impact has its place, it should be understood that it is not the driving force of research. Whatever the method, there is clearly a need for sophisticated tools, but it is equally necessary to understand their limits.

Central databases

What can be done in practice? Universities, for example, need to have ample human expertise and sophisticated research assessment tools suited to the task of assessing universities’ research strengths and weaknesses. They should maintain central databases capable of producing fine-grained, accurate and up-to-date HR and research data. They can also support the emerging practice of using unique personal identifiers to avoid ambiguities about researchers’ correct names.

External agencies should avoid creating perverse incentives for universities and researchers, and should ensure consistency so that comparisons are reliable both locally and internationally. Above all, in assessing university research they need to appreciate that research often has a long-term outlook rather than a concern with immediate return on investment.

Sense and sensibility

Our main point is to call for a sensible approach to research assessment. Governments, universities and others should “assess assessment”, carefully looking at what works in different research environments and building on good practice where there is valid evidence that the process leads to demonstrable improvements in productivity and impact.

The paper will be presented during a breakfast launch event in Brussels on 19 June 2012. More information at
