Off to ARMA… Back Soon!

It’s that time of year again – most of us are off to the ARMA (Association of Research Managers and Administrators) conference next week in Nottingham, so the blog will be a bit quiet for a week or so.

This year’s highlights include a session on public engagement and impact by our very own Alexandra Robson and Samantha King. I’ll also be taking part in a “Special Interest Group” on the whys and wherefores of Research Development (look out for an article soon in Research Professional on this topic written by Phil Ward), and there’ll be plenty to get our teeth into on Horizon 2020 and looking beyond the REF.

Hardy soul that she is, Sam’s going from ARMA straight to the UKRO conference in Edinburgh to learn even more about European funding. The idea is that we’ll all gain some insights and intelligence on research funding and related issues which we will feed back to you via the blog and in briefings and workshops over the summer.

The blog will be back up and running in mid-late June as usual.


Bibliometrics for Beginners – Workshop Report

Our Assistant Director of Research, Ruth Hattam, recently attended a workshop jointly run and sponsored by ARMA and Thomson Reuters on Bibliometrics for Beginners. Bibliometrics – a key part of which is the use of citation data – is growing in importance in Higher Education, particularly as research funding becomes more competitive and institutions need ways to analyse strengths and target key areas for support.

Bibliometrics can to some extent allow strategic oversight of research activity and performance, although it does have several drawbacks and limitations, some of which were covered in the workshop.

Read on to find out more and for an overview of the day.

Bibliometrics: What is it and how is it used?

Bibliometrics literally means ‘measure of books’.

Use of citation indexes is clearly key to bibliometrics. A number were mentioned, including Web of Science (Thomson Reuters) and Scopus (Elsevier) – both databases – and Google Scholar, which also includes internet sources. There are also regional and subject-specific databases. As different editorial policies apply, only one database should be used per comparison – mixing and matching isn’t encouraged, as it wouldn’t give a balanced picture.

Possible uses of bibliometric data include:

  • individual review and recruitment;
  • university rankings;
  • REF 2014, where citation data will be used for the first time (not in all panels, but Panels 3, 4 and 11 will);
  • grant applications.

In terms of metrics, the h-index was developed to measure both quality and quantity. It is the largest number h for which a researcher has h papers that have each been cited at least h times (for example, an academic with 378 papers, 48 of which have each been cited at least 48 times, has an h-index of 48). Self-citation is one of the possible pitfalls when using h-indices, but this figure can be removed from the analysis.
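The definition above can be sketched in a few lines of code (a minimal illustration of the calculation, not any database’s actual implementation):

```python
def h_index(citations):
    """Return the h-index: the largest h such that the author has
    h papers each cited at least h times."""
    # Sort citation counts from highest to lowest.
    sorted_counts = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(sorted_counts, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# A researcher whose papers have been cited [10, 8, 5, 4, 3] times has
# an h-index of 4: four papers have at least 4 citations each, but there
# are not five papers with at least 5 citations each.
print(h_index([10, 8, 5, 4, 3]))  # → 4
```

To strip out self-citations, as mentioned above, you would simply subtract them from each paper’s count before running the same calculation.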

While citation-based analysis like this can be useful, one needs to bear in mind that it doesn’t always take into account the nature of the citation itself. For example, a paper can be cited because the author: wants to build on prior knowledge; agrees or disagrees with the analysis; wants to help or hinder other researchers; wants to disprove the conclusion; or wants to improve their own impact factor. Outliers can also skew the results significantly. Bear in mind that people can look very good on paper even though they are no longer researching – Aristotle, for example!

Data: From indexing to indicators

It’s important to understand what the various bibliometric databases do and don’t include. First, they don’t contain all journals – 80% of papers are published in 40% of journals, so databases don’t try to capture 100% of journals. Google Scholar is much more inclusive because it catches more publications, but the flip side is that these are not necessarily of as high quality.

Second, it is worth considering how data is collected. Thomson Reuters (TR) uses:

  • publishing standards, e.g. peer review and editorial conventions;
  • editorial content: TR has subject specialists who assess content;
  • diversity and the regional influence of authors;
  • citation analysis – for new journals there is an analysis of the editor’s and authors’ prior work.

Third, what kinds of outputs are indexed? The majority of citations from books are within the arts, humanities and social sciences. Science subjects nearly always cite other journals, which reflects the speed at which those fields move. Where books are concerned, TR insists on original research and excludes textbooks.

Fourth, how are the data organised? TR has 249 subject areas, and has incorporated REF categories.

While there are clear advantages to using citation analysis, there are also a number of limitations:

  • productivity measures volume, not quality – although you could argue that a paper has been quality-tested (i.e. peer reviewed) to get into the journal in the first place;
  • the number of self-citations – the h-index does not distinguish these;
  • it is the paper, not the person, that is cited;
  • career stage is not factored in: established researchers have higher productivity, citation counts and h-indices, so you have to normalise for publication year. One approach is to divide the citation count by the number of active years of research; TR compares each paper only with other papers from the same year, looking at the average number of citations papers received in that year;
  • subject differences need to be factored in: comparisons should be done by subject, not within a university. TR normalises by subject category and academic year. You also need to distinguish between types of output, e.g. original research versus reviews, so document type matters;
  • the value of a citation is not assessed – counts will include negative citations;
  • the relative contribution of each author on a paper is not known;
  • the number of authors on a paper is not known, though you can normalise by calculating the average number of authors per paper;
  • it does not automatically account for differences between subject fields: there are lots of initial citations in the sciences before the field moves on, whereas mathematics is a low-cited field where the number of citations is more constant.
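The subject-and-year normalisation mentioned in the list above can be illustrated with a toy example. This is my own sketch of the general idea, with made-up numbers – not Thomson Reuters’ actual formula: each paper’s citation count is divided by the average count for papers in the same subject category and year, so a score of 1.0 means “cited exactly as much as the average paper in that field and year”.

```python
from collections import defaultdict

# Toy data: (subject, year, citations) – illustrative values only.
papers = [
    ("mathematics", 2010, 3),
    ("mathematics", 2010, 5),
    ("biology", 2010, 40),
    ("biology", 2010, 20),
]

# Build the baseline: average citations per (subject, year) group.
totals = defaultdict(lambda: [0, 0])  # (subject, year) -> [citation sum, paper count]
for subject, year, cites in papers:
    totals[(subject, year)][0] += cites
    totals[(subject, year)][1] += 1
baselines = {key: s / n for key, (s, n) in totals.items()}

# Normalise each paper against its own field-and-year baseline.
normalised = [
    (subject, year, cites / baselines[(subject, year)])
    for subject, year, cites in papers
]
print(normalised)
```

Note how the biology paper with 40 citations and the mathematics paper with 5 end up on a comparable scale (1.33 and 1.25 respectively), even though their raw counts differ by a factor of eight – which is exactly why raw cross-subject comparisons mislead.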

Two other bibliometric analyses are worth considering here: Journal Impact Factor (JIF) and Eigenfactor metrics:

The JIF looks at the impact of a journal in a particular research community over the last two years, based on the number of citations, normalised for the size of the journal: the impact factor is the number of citations divided by the number of articles published in that journal. Because it only goes two years back, it is not as good an indicator for slow-moving fields, but it is good at capturing high-level activity in fast-moving subjects, e.g. the natural sciences and engineering, and can inform where to publish in those subjects. A five-year impact factor has also been developed to take account of subject differences.
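The arithmetic is simple enough to show directly. The numbers below are made up for illustration, not data for any real journal:

```python
def impact_factor(citations_this_year, articles_prev_two_years):
    """Two-year journal impact factor: citations received this year to
    items the journal published in the previous two years, divided by
    the number of citable articles it published in those two years."""
    return citations_this_year / articles_prev_two_years

# Hypothetical journal: 600 citations in 2012 to articles published
# in 2010-11, of which there were 200.
print(impact_factor(600, 200))  # → 3.0
```

The division by article count is the “normalised for size” step: a journal publishing 1,000 articles a year would need far more total citations than a small one to reach the same impact factor.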

Eigenfactor metrics were developed at the University of Washington by Jevin West and Carl Bergstrom. From Wikipedia: Eigenfactor is a rating of the total importance of a scientific journal. Journals are rated according to the number of incoming citations, with citations from highly ranked journals weighted to make a larger contribution to the Eigenfactor than those from poorly ranked journals. Eigenfactor scores and article influence scores are freely viewable online.
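The weighting idea described above – citations from highly ranked journals count for more – amounts to an eigenvector computation on a journal-to-journal citation matrix. The sketch below is a bare-bones illustration of that idea with made-up numbers, not the actual Eigenfactor algorithm (which adds damping and several other refinements):

```python
# Citation matrix for three hypothetical journals:
# M[i][j] = fraction of journal j's outgoing citations that go to journal i
# (each column sums to 1).
M = [
    [0.0, 0.5, 0.3],
    [0.7, 0.0, 0.7],
    [0.3, 0.5, 0.0],
]

# Power iteration: repeatedly feed the scores back through the citation
# matrix until they stabilise at the leading eigenvector.
scores = [1 / 3] * 3
for _ in range(100):
    new = [sum(M[i][j] * scores[j] for j in range(3)) for i in range(3)]
    total = sum(new)
    scores = [s / total for s in new]

print([round(s, 3) for s in scores])
```

Journal 1 ends up with the highest score because the other two journals send it most of their citations, so its incoming links carry the most weight – the same circular logic (“important journals are those cited by important journals”) that underlies PageRank.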

Conclusion: International issues and the future of bibliometrics

Citation data can be used to examine the extent of international collaborations of researchers or institutions.  Data showed that working with international collaborators increases the number of citations.

A university can also use it to look at citation data relative to the number of published papers. Universities are starting to look at citations alongside other factors, e.g. the amount of industry income brought in per researcher, or the number of doctoral degrees awarded per member of academic staff, based on data from HESA. Thomson Reuters also combines citation analysis with other data sources to perform more fine-grained analyses.

The point was made that China might be bringing down the average, due to a massive growth in the number of papers combined with a relatively low citation rate.

Ultimately before doing any sort of analysis or evaluation, you need to clearly define your objectives – what do you want to know and what will the data inform?


For any Northumbria staff interested in finding out more about the ways bibliometrics can be used, it’s worth noting that the Library provides online instructional support in using such tools in the Measuring your Research Performance section of Skills Plus.


Research Councils in 2013, Part 1: Harmonisation, Demand Management and Early Career Researchers

This is part 1 of a 2-part series. I’ll post the second part tomorrow.

Polaris House, Swindon was the location of the latest joint ARMA (Association of Research Managers and Administrators) RCUK “Study Tour” which took place yesterday. We’ve been to these kinds of events before, but this was a little different. All previous Study Tours I’ve attended have been hosted by a single Research Council or funder, whereas this was a joint effort with representatives from all seven RCs, plus the “Shared Service Centre” – the back office for all RCs – and Je-S help desk. There was also a conscious attempt throughout most of the sessions to be more interactive, and the programme was pitched at “senior” research managers with a promise of greater discussion of policy and future strategic directions for RCs.

The key word was “harmonisation”: Peter Hollinswaite (Business Manager at MRC) set the tone by announcing that the RCs have now reached a more or less “stable state”, following a 2-3 year process during which they have moved to a single physical location and aligned pre- and post-award processes (all Councils now use Je-S, for example). However, care was taken to distinguish harmonisation from ‘standardisation’. There was a recognition that different RCs serve different academic and user communities with distinct needs, so business models may differ – for example in the way they support postgraduate students, though even here there have been increasingly harmonised moves towards “block grant” models and away from the old individual and project grant studentships across all the Councils.

The usual stats and numbers were rattled through first to give some context:

  • RCs processed 14,000 applications in 2012
  • There has been an increase in success rates from 18% to 26% across all schemes over the past 2 years
  • Demand has fallen 5% per annum over the past 2 years

Peter said that the next phase of cross-Council harmonisation would include: further simplification and rationalisation of funding schemes; a review of the process of peer review; scrutiny of terms and conditions and guidance to reduce confusion. As part of this process RCs will be carrying out surveys with various stakeholder groups, including research admin offices in universities.

The perennial topic of “Demand Management” was the focus of Gerald Owenson’s (BBSRC) discussion session. He outlined a number of measures, which he labelled ‘direct’ and ‘indirect’, introduced over the past few years which have led to the reduction in numbers of bids and consequent increase in success rates. Direct measures include:

  • Resubmissions are now generally not accepted by RCs unless invited (NERC is an exception – you can resubmit after 9 months)
  • Use of outline or preliminary stage applications has increased – the rationale is that outlines require less paperwork and so take less time for both the applicant and the RC to process. However, I’d argue that significant work goes on ‘behind the scenes’, particularly in terms of costing and partnership formation, which, although not visible in the submitted bid, nevertheless takes a significant amount of time
  • EPSRC has introduced individual researcher sanctions which limit repeatedly unsuccessful applicants to one bid per year. This has been controversial but has increased EPSRC success rates significantly, though other RCs have been reluctant to follow suit

Indirect demand management measures include:

  • Providing feedback to PI and Research Organisation (RO), including peer review and panel meeting comments. Peer review comments are not currently systematically returned to ROs, but Peter indicated this is set to change
  • Encouraging ROs to undertake internal assessment or peer review  of bids before submission, which most universities do to some extent
  • On this last point in particular, Gerald encouraged ROs to make use of their own internal staff resources, including the “insider knowledge” of people who are on RC peer review colleges and panels. I suggested to him later that it would be useful to offer opportunities for academics and research managers to sit in on RC panel meetings, in order to broaden experience. However he indicated this would be difficult due to limited space in the panel meeting rooms!

Kirsty Grainger and Avril Allman (both NERC) emphasised the importance of PhD studentships and Early Career Researchers to Research Council future plans. Of the annual UK output of 17,400 PhDs, 5,000 are RC-funded. For some Councils, around 50% of funding is invested in PhDs. Increasingly, measures to secure fitness for employment are seen as a key part of student training programmes. In addition, all Councils now encourage interdisciplinary studentships, although there must be a lead Council. There was a suggestion that there may be specific joint-Council interdisciplinary calls for studentships in future.

Find out what the Research Councils and universities think about Je-S, as well as some insight on EPSRC and NERC priorities for 2013 in Part 2 tomorrow.


#arma2012: Using Social Media in Research Support

Most of us have now returned from this year’s ARMA conference (apart from those who’ve bravely stayed on another few days for the PraxisUnico conference). A full write-up will follow, but in short the event was largely excellent: I attended several exceptionally interesting and useful workshops, made new connections with colleagues across the sector, and ate far too much rich food!

As previewed earlier this week, Julie Northam (Bournemouth), Adam Golberg (Nottingham), Phil Ward (Kent) and I presented a workshop on “Using Social Media in Research Support”. It seemed to go down very well, and we’ve received positive feedback both in person and on Twitter following the event. Here are the slides – comments greatly appreciated:

In general it felt to me like more delegates were engaging with social media right from the start of the conference. But don’t just take my word for it: according to SearchHash the #arma2012 hashtag saw 327 tweets from 74 users. I wasn’t able to get the numbers for last year’s conference, because Twitter makes it difficult to find older tweets, but this is a significant increase. Most tweets were over the 2 days of the conference, with peak times being Tuesday and Wednesday mornings (86 and 82 tweets respectively).

Peoplebrowsr reckons the most popular hashtags of the conference (apart from #arma2012 of course) were: #openaccess, #researchproposals, #ktp, #ref2014 and, er, #zilchopilcho. The top tweeters at the event (based on Klout score) were @frootle (Phil Ward, Kent), @lostmoya (aka yours truly), @brookes_ktp (Emily Brown, Oxford Brookes), @cash4questions (Adam Golberg, Nottingham), @dpotta (Dave Potter, Joseph Rowntree) and @annamgrey (Anna Grey, York).


Preparing for ARMA 2012

A few of us from RBS are heading down to Southampton tonight for this year’s Association of Research Managers and Administrators conference, subtitled “Making a Difference!”. Taking place over 12th-13th June, the conference provides an opportunity to hear from our peers on a range of topics related to research management, from ways to improve pre-award support to increasing the impact from funded research.

I’m also involved in co-delivering a parallel session on Using Social Media in Research Support, alongside Julie Northam (Bournemouth), Adam Golberg (Nottingham), and Phil Ward (Kent). We’ll be looking at how to use different types of social media in our professional lives and hopefully demonstrating that Twitter isn’t just about discussing what you had for breakfast. There’ll also be an opportunity to take part in a practical demonstration of the value of social media when we invite feedback and comments on the session via our blogs!

Here’s our session abstract:

Blogs, Twitter, Facebook, Wikis, and LinkedIn are all examples of ‘social media’ – methods of internet communication that allow the exchange of ideas, sharing and collaborative creation of resources, and making new contacts with people with common interests. How might universities make use of social media in research support? What works, what doesn’t, and why? This session will include an introduction to social media, and presentations of case studies about university research offices who are already using social media, particularly blogs, and from individuals using social media to expand their own professional networks.


AHRC Study Tour 2012 – Peer Review and Fellowships

This is the third part of a series on the AHRC 2012 Study Tour.

Dr Sue Carver gave a short overview and update on the AHRC’s peer review process. A bit of background first: The AHRC Peer Review College (PRC) was established in 2004 with 460 members, and now has over 1,300 across five different groupings: academic, knowledge exchange, international, strategic, and technical. Peer reviewers from these groupings are sometimes used for different elements of a single application or for applications to different calls, for example, where a bid includes a technical appendix. The AHRC has a service level agreement of sorts with each reviewer to ensure that they get no more than 8 proposals per year to review.

Membership of the PRC is valuable both for institutions (used in internal peer review processes) and individuals (esteem and improving knowledge of writing bids). There is clearly a high degree of awareness among many research-active staff: a recent AHRC call for membership of the PRC drew in over 300 applications for membership, despite the fact that the call was targeted towards particular themes and research areas.

In terms of the process for funding applications at AHRC, the PRC is a central part of the picture:

  • Application submitted via Je-S
  • Checked in AHRC for eligibility
  • 3 peer reviewers are selected from PRC, plus 1 technical reviewer where necessary
  • Reviews are completed, returned and checked for quality
  • Applications are sifted in AHRC: if an application does not receive two fundable grades, it is rejected
  • Applicant notified and given opportunity to respond to reviews
  • Application, reviews and PI response are forwarded to panel
  • Panel meets and makes decisions on academic ranking of applications
  • AHRC funds as far as they can down this list until funding runs out

Four key points were made in the update:

  1. A new grading scale (from 1-6) came into use from 1st December 2011. This means that grading is now harmonised across all research councils;
  2. From 1st April 2012, resubmission to AHRC will be by invitation only. Again this brings AHRC into line with other research councils, such as the ESRC. This policy is also an important part of their demand management strategy;
  3. In response to a question on the place of impact in peer review, we were told that this comes in as a secondary criterion during the ranking of proposals. Where two proposals are equivalent in scientific excellence, the one with the higher-quality Pathways to Impact statement will be ranked higher;
  4. Where the AHRC has invited applications through highlight notices, a decision will be taken internally in AHRC as to whether to fund additional highlighted projects which may be ranked lower in the scale.

Katherine Warren, AHRC’s Strategy and Development Manager, focused on the recent major changes to the Fellowships scheme.

The most significant change is that AHRC is moving away from funding Fellowships which support “completion” to supporting visionary individuals with the potential to set research agendas. Often the research supported will be at an earlier stage of development. In addition:

  • Fellowships will adhere to the “longer and larger” maxim: 6 months minimum, 18 months maximum (or 24 months maximum for the early career route);
  • There will be fewer and more prestigious awards. Katherine made the point that the AHRC will “only fund the best”. They see this as one way to increase impact from the awards they make;
  • Fellowships will be used to sustain undersupported subject areas and to bolster AHRC’s strategic priorities.

In the early career fellowships route, part of the focus will be on helping the fellow to develop their people management skills. In this regard it will be more like the ESRC’s Future Research Leaders scheme. But the HEI must also be willing to demonstrate its support of the potential candidate: there must be an existing contract in place and further commitment (such as sabbaticals, internal funding, and training for leadership) should be evidenced in the Head of Department’s supporting statement.

Katherine emphasised the importance of internal selection of candidates, including the linkage and alignment of a candidate’s programme of research to institutional research strategies and aims. The steer was very much to get institutions thinking: “who are our top few people?”. However, the AHRC has stopped short of putting limits on the numbers of applications, though, much like the ESRC’s position on Future Research Leaders, it has indicated that it will keep this policy under review.


AHRC Study Tour 2012 – Introduction

Last Friday Sam King and I took the long train down from Newcastle to Swindon to visit the Arts and Humanities Research Council for an ARMA-sponsored “Study Tour” (it took 5 and a half hours to get there – a substantial journey, but not quite the epic 7 and a half hours it took me last time I visited Research Councils HQ). AHRC have helpfully uploaded the agenda and all of the presentations on their website:

AHRC/ARMA Study Tour 2012

Study Tours are a useful opportunity for research support staff like us to meet with research council staff and hear about their latest strategic priorities, discuss any policy shifts, and find out about new or revamped funding opportunities. Despite having the smallest budget of all the UK research councils, the AHRC is the primary funder for many researchers in arts, humanities and related disciplines. One of the messages which came across clearly throughout the day was that their funding has a significant effect on the research community and the UK’s economic, social and cultural well-being (see, for example, their recently published impact report for 2011).

Over the course of this week, we’ll be writing about the main insights and messages from the day on this blog. We’ll also arrange an AHRC update event in the near future open to all Northumbria staff to elaborate on some of the key points and discuss potential opportunities for funding. If you’d like to ask any questions in the meantime, please either leave a comment on the blog or contact us.

Here’s the full list of posts:

  1. Mark Llewellyn on Future Strategic Directions
  2. ‘Emerging Themes’ Overview
  3. Peer Review and Fellowships
  4. Knowledge Exchange and International Opportunities
  5. Research Careers, Block Grant Partnerships and Final Questions