Do the numbers matter?

We are now at the point in the year where we start to get hold of course-level metrics: on employability through DLHE, on student experience through the NSS, and on student performance, in terms of retention and attainment, through our own datasets.

Bringing these together means that we can create a snapshot of how “well” a course might have performed over the past year.
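
As a very rough sketch, building that snapshot can be as simple as joining three tables on a course identifier. Everything below is illustrative – the figures, course names and column names are invented, not drawn from our actual systems:

```python
import pandas as pd

# Illustrative extracts; in practice these would come from the DLHE
# return, the NSS results and our own student-record data.
dlhe = pd.DataFrame({"course": ["A", "B"], "employability_pct": [92, 78]})
nss = pd.DataFrame({"course": ["A", "B"], "satisfaction_pct": [85, 90]})
records = pd.DataFrame({"course": ["A", "B"],
                        "retention_pct": [94, 88],
                        "good_degree_pct": [71, 65]})

# One row per course: the "snapshot" of last year's performance.
snapshot = dlhe.merge(nss, on="course").merge(records, on="course")
print(snapshot)
```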

There have been a number of publications over the summer on the use of numbers and metrics, in particular the report “The Metric Tide”, which reflects on the use of metrics to assess research excellence.

However, the report also contains chapters on management by metrics and on the culture of counting. As someone who works extensively on the performance of our portfolio of courses, as well as on league tables, I found these of particular interest.

“Across the higher education sector, quantitative data is now used far more widely as a management aid, reflecting developments in the private sector over recent decades. […] most universities now plan resource allocation centrally, often drawing on the advice of dedicated intelligence and analysis units that gather information from departments and faculties. The use of such systems has helped universities to strengthen their reputation as responsible, well-managed institutions. The relatively robust financial position of the sector, and the continued trust placed in universities by public funders to manage their own affairs, is in part founded on such perceptions of sound financial governance.

“The extent to which management systems in HEIs help or hinder institutional success is of course contested. On the positive side, such systems have helped to make decision making fairer and more transparent, and allowed institutions to tackle genuine cases of underperformance. At the same time, many within academia resist moves towards greater quantification of performance management on the grounds that these will erode academic freedoms and the traditional values of universities. There is of course a proper place for competition in academic life, but there are also growing concerns about an expansion in the number and reach of managers, and the distortions that can be created by systems of institutionalized audit.”

What is important, then, is how we deal with the data. A list of numbers alone does not create useful management information. Indeed, even a collation or aggregation of all the data (similar to a league-table approach) is still only one part of the picture.

What data and information such as this does provide are some insights into how different parts of the university are faring, and how our different groups of students see us.

The useful work starts when we work out how to use the numbers – this is where we have those conversations with course teams to find out why a metric is particularly high or low. Is there some really great practice that can be shared with others? Is there a reason for a disappointing NSS score?

Only by going beyond the numbers and engaging with the course teams will we get the full insight into why the results are as they are.

This is not to say that everything can be explained away. The whole point of building up a metrics approach to assessing what we do is threefold:

  • To make sure all colleagues are aware of how measurable outcomes affect us reputationally and reflect the results and experience of actual students
  • To provide consistent, reliable management information to act as a trigger for further investigation (a minimal example follows this list)
  • To raise the data literacy of all groups of staff.
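
To illustrate what a trigger might look like in practice – nothing here is our actual rule; the threshold and field names are assumptions for the sake of the example:

```python
# A trigger is a rule that starts a conversation, not a verdict.
# Flag any course sitting more than 5 points below its benchmark.
TRIGGER_MARGIN = 5  # percentage points; an illustrative threshold

courses = [
    {"course": "A", "satisfaction_pct": 85, "benchmark_pct": 84},
    {"course": "B", "satisfaction_pct": 74, "benchmark_pct": 86},
]

for row in courses:
    if row["satisfaction_pct"] < row["benchmark_pct"] - TRIGGER_MARGIN:
        print(f"{row['course']}: well below benchmark - talk to the course team")
```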

We should not be afraid of using metrics to judge a programme, but we should also become better at using that information to understand exactly why we perform as we do.

As well as looking at the raw data, we also need to look closely at what we are trying to achieve, and how that might influence the way we set benchmarks and targets. Some examples (sketched in code after this list) might be:

  • Benchmarking NSS results for a subject against the sector average for that subject. This shows how well we do in comparison with others, rather than against an internal university-wide average score (guess what – half our courses were above average).
  • Considering a calculation of value added instead of good degree outcomes. For a university with a significant intake of widening participation students, this might be a better reflection of “distance travelled” and show the results of our teaching. Any value-added score would have to be different from the one used in one of the league tables, which only counts 1sts and 2(i)s as a good outcome; for some students, a 2(ii) might be the appropriate result.
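
To make both examples concrete, here is a rough numerical sketch. All figures are invented, and the “expected” rate stands in for a proper model built from entry-qualification data:

```python
# Benchmark each subject's NSS score against the sector average for
# that subject, not against an internal university-wide average.
sector_avg = {"History": 86, "Engineering": 82}   # illustrative figures
our_scores = {"History": 84, "Engineering": 85}

for subject, score in our_scores.items():
    print(f"{subject}: {score - sector_avg[subject]:+d} vs sector average")

# Value added: compare achieved outcomes with an expectation derived
# from the entry profile. "Good outcome" here includes a 2(ii) where
# that is the appropriate result for the student, unlike the
# league-table definition that stops at 1sts and 2(i)s.
achieved_rate = 0.72   # share of students reaching their appropriate outcome
expected_rate = 0.63   # hypothetical modelled expectation
print(f"Value added: {achieved_rate / expected_rate:.2f} "
      "(above 1.0 means outcomes beat expectation)")
```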

We should all be aware that using metrics to assess quality and performance is becoming increasingly important.

The current consultation from HEFCE on the future of quality assurance has a number of major themes, but two of these are around data and governance.

Among the proposals is the suggestion that quality could be assured by each university identifying its own range of measures of quality, with governing bodies then in a position to make judgements of success against them.

This could be an opportunity to create a set of metrics that really measure where we want our successes to be, and that are actually aligned to the mission of the university, rather than ones that might suit another university more readily.

Secondly, it means that governing bodies (and the people who brief them) will need to become more aware of data, its limitations and its meanings.

Finally, and this is a concern: the proposed Teaching Excellence Framework will most likely be put in place very quickly, and it will be metrics-based. In the time available, it might only draw on metrics and measures that are already well known and used – NSS, DLHE, good degrees (not dissimilar to a league table so far). Since the ability to charge increased fees will depend on success in the TEF, even if we can in future identify our own measures of success, in the short term we cannot stop focussing on those key indicators.