Principles to promote responsible use of research metrics

In Michaelmas term 2019, the Research and Innovation Committee approved a set of ten principles to promote the responsible use of research metrics at Oxford. With these principles the University demonstrates its support for the San Francisco Declaration on Research Assessment (DORA), to which the University became a signatory in 2018.


The University is committed to a research environment founded on fairness, consistency, accountability and inclusivity, which is transparent in assessment and responsive to discipline-specific needs and practices.

Research metrics are used in the evaluation of individuals, and this set of ten principles has been drafted to support appropriate use and avoid inappropriate use. The principles provide best practice guidance to all members of the collegiate University when undertaking any form of research assessment that can have a direct impact on individuals, whether singly or as members of a group.

The principles to promote responsible use of research metrics have been informed by the three most important sources of best practice guidance and advice; the relevant Leiden Manifesto principles and DORA institutional commitments are cited under each principle below.

When selecting research metrics, the chosen metrics and their use should be tested against all ten principles.

Each principle is accompanied by a short explanation of what is meant by it.


1. Research metrics should only be used to inform and support, not supplant, qualitative expert assessment.

Leiden Manifesto principle 1

Peer review remains the method of choice for assessment. Quantitative indicators can be a valuable source of additional information and assist in making more equitable judgements.


2. Journal Impact Factors should not be used as a surrogate measure of the quality of individual research articles.

DORA institutional commitment 1

This principle reinforces the University’s code of practice for REF2021 which requires each UOA coordinator to develop a process based on the following guidelines:

  • Selection decisions should be based on peer review conducted by at least two subject matter experts;
  • External peer review should be undertaken where possible;
  • Interdisciplinary research and non-traditional outputs should not be disadvantaged.


3. Where research metrics are considered in assessment of individuals, including recruitment, probation, performance, reward and promotion, these should be clearly stated in the guidance and application documentation. In addition, this documentation should also confirm that the research content of a paper is much more important than publication metrics or the identity of the journal in which it was published.

DORA institutional commitment 1

This principle further expands on the University’s commitment in signing up to DORA, ie journal-level metrics will not be used when assessing individuals.


4. Research metrics should be selected that best reflect the nature of the research discipline in terms of publication, citation and (external) funding practices, other types of research outputs and outcomes, impact, collaboration, supervision and career paths. Normalised metrics should be used where these are available and robust.

Leiden Manifesto principle 6; DORA institutional commitment 2

How informative research metrics are depends on the discipline. Generally, research metrics are more informative in the natural and medical sciences than in the social sciences and humanities. This principle also aims to recognise the many different facets of research and, therefore, of researcher career paths.

Although normalised metrics are considered the best choice of metrics, they are not always available and can lose some of their transparency through the normalisation process. Furthermore, when assessing within the same discipline, normalisation is less important.


5. The selection of research metrics should be accompanied by information on the source, the format and level of precision (eg number of decimals), and the definition and context, including systemic effects and weaknesses.

Leiden Manifesto principles 5, 8 and 9

The ability to verify metrics is key to ensuring that research metrics are used properly, and the ability to challenge and critique metrics is vital to their quality and credibility.


6. When choosing to use research metrics in assessment, no single metric should be used in isolation.

Leiden Manifesto principle 7

This set of principles forms best practice guidance for the assessment of individuals or groups of individuals. One metric provides one perspective and is unlikely to assist in robust evidence-based decision making. When evaluating an individual or a group of individuals, the aim should be to use a suite or basket of metrics; where only one (responsible) research metric is available, it should be used in combination with qualitative information.


7. Research metrics should be applied at the appropriate level of granularity. When evaluating individual researchers, metrics related to an individual’s performance should be used.

This principle is in line with the University’s commitment in signing up to DORA, ie Journal Impact Factors or other journal-level metrics will not be used when assessing individuals; it also applies to the selection of other metrics.


8. When employing research metrics for comparative evaluation, whether between individual researchers or groups of researchers, the methodology applied to the research metrics for the comparative evaluation should be made available to any individuals directly affected.

Transparency is key in responsible use of research metrics not only in the selection and presentation of metrics but also in their use in assessment.


9. The selection of research metrics that reflect or introduce bias (eg gender) should be avoided or otherwise addressed in the relevant assessment.

The University has an active duty to consider the impact on equality in all decision making, and the selection of metrics that could have a negative impact on equality should be avoided or mitigated.


10. Research metrics should be scrutinised regularly to make sure they are still fit for purpose, taking into account research metrics that have become recently available and ‘gaming’ practices.

Leiden Manifesto principle 10

Systems of assessment can influence the research metrics employed; the research landscape is changing continuously; the priorities of assessment are subject to change; and, due to advances in technology and data capture, new research metrics are becoming available. It is therefore important that regular scrutiny takes place to ensure the selected metrics remain fit for purpose.

To support the recruitment and reward of outstanding researchers, we have developed guidance that lists practices to avoid and suggests positive alternatives to promote the adoption of responsible evaluation practices. Please see our Guidance for Implementation document.


The implementation of the ten principles to promote the responsible use of research metrics is focused first on the University meeting its commitment as a signatory of DORA, the San Francisco Declaration on Research Assessment:

  1. Be explicit about the criteria used to reach hiring, tenure, and promotion decisions, clearly highlighting, especially for early-stage investigators, that the content of a paper is much more important than publication metrics or the identity of the journal in which it was published.
  2. For the purposes of research assessment, consider the value and impact of all research outputs (including datasets and software) in addition to research publications, and consider a broad range of impact measures including qualitative indicators of research impact, such as influence on policy and practice.

Implementation in recruitment and renewal processes

The first phase of the implementation of DORA in recruitment and renewal processes is the Associate Professor Inclusive Recruitment project. This project is sponsored by the Pro-Vice-Chancellor for People & Gardens, Libraries and Museums together with the Head of Recruitment in HR and is running in 2020/21.

Project aims

  1. Understand the steps required for broadening the race and gender diversity of associate professors (APs)
  2. Understand the impact DORA will have on the AP recruitment process and collaboratively identify practical solutions
  3. Ensure consistency of approach to AP recruitment, whilst still recognising and retaining important differences

Planned project outputs

  • a set of standards for the AP recruitment process that increase the diversity of the candidate pool and meet DORA principles for research assessment
  • updated advice to applicants about research assessment, and templates for adverts, job descriptions and selection criteria

Ensuring sustainability

  • ongoing data collection
  • stakeholder surveys
  • HR Self Assurance
  • keeping in touch with key contacts


The outcomes and outputs of this project will form the basis for implementing DORA in:

  • the recruitment of research staff and statutory professors
  • reappointment to retirement of associate professors in the initial period of office (IPO)
  • recognition of Distinction Awards to confer the title of professor

Implementation in research assessment

The University’s code of practice for REF2021 sets out the process for selecting outputs for research assessment and requires each UOA coordinator to develop a process based on the following guidelines:

  • selection decisions should be based on peer review conducted by at least two subject matter experts
  • external peer review should be undertaken where possible
  • interdisciplinary research and non-traditional outputs should not be disadvantaged

The code of practice also explicitly states that research metrics should be used responsibly, in accordance with DORA, thereby excluding the use of journal impact factors.