The use of metrics in research assessment.

Ruth Hattam (Assistant Director for Research) recently attended a session on the prospects and pitfalls of using metrics in research assessment. The event was hosted by SPRU (Science Policy Research Unit), based at the University of Sussex, which is undertaking the HEFCE review of metrics; the report is due in June 2015. There was no indication of the likely outcomes, and Steven Hill (Head of Research Policy, HEFCE) was keen to stress that no decision had been made about metrics and the next REF.

The event was well balanced, with a variety of views from a range of speakers. There seemed to be broad consensus that metrics alone should not be used to assess research, with general support for a mix of qualitative and quantitative approaches, although which should come first, or have prominence, was not resolved.

As an observation, those speakers with an interest in promoting metrics were careful to stress that metrics are only one indicator, whilst some speakers on the other side of the debate were more forceful in their criticism, arguing that metrics are an unreliable means of assessment. One speaker used his own citations to illustrate this point, asserting that his most frequently cited articles did not correspond to his best research. Other general discussion points included: metrics could only potentially be useful as an indicator of significance among the three REF criteria for outputs (originality, significance and rigour); issues around impact metrics; peer review is a far from perfect system, potentially subject to individual bias; the public interest should dominate; and the use of altmetrics (e.g. social media and blog posts, anything that is not citation-based).

The event featured a ‘metrics bazaar’, which allowed participants to explore metric tools and platforms with a range of developers and providers. Of particular interest was an overview of ‘The Conversation’, an independent source of news and views sourced from the academic and research community and delivered directly to the public.

The afternoon session explored the ‘darker side of metrics’, although the speakers did not delve into some of the gaming practices that have been unearthed (e.g. the self-citation malpractice uncovered at the Journal of Business Ethics). Some of the discussion points included: the number of retractions is on the rise, including in ‘prestigious’ journals; the sector has to be realistic and accept the principle of measurement, as other publicly funded sectors (e.g. health) have done; the use of metrics would potentially change behaviour; the term ‘metrics’ should be replaced by ‘indicators’; and arts and humanities academics need to engage in the debate.