Snowball Metrics: creating global research benchmarks that institutions actually want to use

By John T Green and Lisa Colledge

You can hear the personal perspective of Dr Malcolm Edwards, Head of the Planning and Resource Allocation Office at the University of Cambridge, by watching the case-study video above.

All researchers aspire to achieve international acclaim, and to make a difference, through innovation and excellent research. Research institutions drive their own success through the performance of these individuals, and by striving to attract excellent researchers and to retain their high flyers. They measure their success both by the outcomes of research and by the quality of their teaching.

Research success can to some extent be illustrated by measures such as the number of awards received, the volume of scholarly output such as peer-reviewed publications, and counts of citations received from other academics. Financial return on investment can be demonstrated by indicators such as licensing income, spin-off activity and societal impact. However, a useful understanding of success can only be gained by looking at an institution's performance within the national and global context of its peers; numbers in isolation are extremely difficult to interpret.
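To see why raw figures need peer context, here is a minimal sketch in Python. The institutions, figures and the simple ratio used are invented purely for illustration, and this is not a Snowball Metrics recipe: it merely shows how the same citations-per-output figure reads very differently once it is set against a peer-group baseline.

# Illustrative sketch only: the institutions, figures and the simple
# ratio below are invented, and this is not a Snowball Metrics recipe.

# (institution, scholarly outputs, citations received) -- hypothetical peers
peer_group = [
    ("Institution A", 4_200, 52_000),
    ("Institution B", 3_800, 61_000),
    ("Institution C", 5_100, 48_000),
]

own_outputs, own_citations = 4_500, 55_000
own_rate = own_citations / own_outputs   # citations per output, in isolation

# The same indicator computed across the peer group supplies the missing context
peer_rate = sum(c for _, _, c in peer_group) / sum(o for _, o, _ in peer_group)

print(f"Citations per output: {own_rate:.2f}")            # 12.22
print(f"Peer-group average:   {peer_rate:.2f}")           # 12.29
print(f"Relative to peers:    {own_rate / peer_rate:.2f}x")  # 0.99x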

Ideally, institutions should have all of this information available in a strategic ‘dashboard’ that provides a menu of the latest available figures, giving insight into key activities. Many institutions do generate figures to track their performance in some areas, and their methods have typically evolved to suit the data they have available, its structure, and the systems it happens to sit in. Sometimes the selection of one approach over another may even have been influenced by a desire to make performance look as good as it possibly can for various external reporting or regulatory requirements. However, data, structure and systems differ widely between institutions. How, then, does this line up with the need to look at an institution’s performance in relation to others? Even where these dashboards do exist, how useful are they without this context?

The answer is that this multitude of different measures used across institutions makes it impossible to benchmark performance rigorously against that of others. Any attempt to harmonise at least some of the measures across institutions would vastly increase the usefulness of strategic dashboards. The current state of affairs prevents institutions from using all of the data and information available to them – that is, their own institutional data as well as data held by government, commercial companies and other third parties – to understand their strengths and weaknesses properly. This hampers them from developing truly market-aware strategies to achieve their potential. The situation is worsened by a feeling within universities that the data they collect, and the metrics they calculate, are driven by the external needs of the governments and funders to whom they are accountable and must report, rather than by a consideration of their own needs and ambitions.

What could be a solution? The Snowball Metrics initiative is a collaboration of eight highly successful research universities, including Oxford and Cambridge, with a vision to remedy this situation on a global scale. The aim is to use all sources of data available for benchmarking, within every area relevant to the strategy of an institution. What this means is that representatives from these eight universities have agreed a single approach to generating the metrics they consider important for providing strategic insight.

This may sound simple, but in practice it is extremely challenging. These eight universities need to find a single way in which their distinct data sources, held in different systems, can all be combined with commercial and third-party data sources and used to generate consistent metrics. It is also critical to test whether the desired metrics can be calculated from real data; this test is performed by the commercial project partner, Elsevier, under the direction of the institutions. Clarity, consensus and practicality are essential to ensure that it makes sense to compare metrics between institutions, that any differences truly represent a difference in performance and not a difference in interpretation or calculation, and that those responsible for setting and tracking the institutional vision are confident in taking decisions based on this intelligence.
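To make the harmonisation problem concrete, here is a minimal sketch, again in Python, of the kind of mapping step involved: records held in two different hypothetical institutional formats are translated into one agreed shape before a shared count is computed. The source names, field names and functions are invented for illustration; the actual agreed methods are those defined in the Snowball Metrics Recipe Book, not this code.

# Illustrative sketch only: the source names, field names and functions
# are invented to show the idea of agreeing one data shape across institutions.

def normalise_output(record, source):
    """Map an institution-specific publication record onto one agreed shape."""
    if source == "institution_x":    # hypothetical in-house CRIS export
        return {"year": record["pub_year"], "type": record["doc_type"]}
    if source == "institution_y":    # hypothetical library-system export
        return {"year": int(record["date"][:4]), "type": record["genre"]}
    raise ValueError(f"unknown source: {source}")

def scholarly_output(records, year):
    """Count outputs for a given year, computed identically everywhere."""
    return sum(1 for r in records if r["year"] == year)

# Once both institutions' data share one shape, the counts are comparable.
x = [normalise_output({"pub_year": 2013, "doc_type": "article"}, "institution_x")]
y = [normalise_output({"date": "2013-05-01", "genre": "article"}, "institution_y")]
print(scholarly_output(x + y, 2013))  # -> 2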

The vision is to be able to benchmark in this way on a global scale, so it is clearly not enough for just these eight universities to use the Snowball Metrics. To this end, the agreed and tested methods are published free of charge in the Snowball Metrics Recipe Book, available at www.snowballmetrics.com/metrics. None of the project partners, neither the institutions nor Elsevier, will ever charge for these recipes, and anyone is free to implement them in their own systems for their own purposes. It is intended that, from a small start with these eight institutions, this approach will “pollinate” other institutions around the world, and that the initiative will “snowball” to a global scale.

This cross-fertilisation will not happen overnight, nor will it happen on its own. The initiative is taking a two-pronged approach to giving the snowball a push, so that it will eventually keep rolling on its own. Firstly, it is encouraging suppliers of research information to implement the Snowball Metrics recipes in their tools; Elsevier is the first supplier to do so in its global tools, and it is hoped that many others will follow. Secondly, it is forming similar clusters of institutions in other nations with high research capacity (to date, the United States and Australia/New Zealand) to follow in the footsteps of the original UK group and to ensure that the recipes can be generated from all the data sources available in different national contexts.

This is a long-term, difficult and ambitious initiative, which has often been met with a degree of scepticism, especially due to the involvement of a commercial supplier. So why are the project partners involved? The appeal of being in one of the driving seats of a consensual, democratic process that aims to generate an international language of metrics is very strong. There is also a chance to influence governments and funders, who have had such a large influence on the data collected and metrics generated within institutions, to adopt the Snowball Metrics standards. This would result in enormous efficiency savings that could be invested back into the core businesses of universities: research and teaching.

John T Green is Chair of the Snowball Metrics Steering Group.

Lisa Colledge is the Snowball Metrics Programme Director.

 
