Guardian University Guide 2017

The second big university league table of the year, the Guardian University Guide, was published today. The compilers say it is the most student-friendly table, as it focuses on subject-level scores in more detail and measures things that matter to students. In other words, research does not form part of the table.

“The methodology focuses on subject-level league tables, ranking institutions that provide each subject area according to their relevant statistics.

To ensure that all comparisons are as valid as possible, we ask each institution which of their students should be counted in which subject, so that they will only be compared to students taking similar subjects at other universities.”

Eight statistical measures are employed to approximate a university’s performance in teaching each subject. The measures relate both to input – for example, expenditure by the university on its students – and to output – for example, the probability of a graduate finding a graduate-level job. They are knitted together into a Guardian score, against which institutions are ranked.
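To make the "knitting together" concrete, the combination of standardised measures into a single score can be sketched as a weighted average. The measure names and weights below are purely illustrative assumptions for the sketch, not the Guardian's actual methodology or weightings.

```python
# Illustrative sketch only: hypothetical measures and weights,
# not the Guardian's published methodology.

def composite_score(measures: dict[str, float],
                    weights: dict[str, float]) -> float:
    """Weighted average of standardised (0-100) measures."""
    total_weight = sum(weights.values())
    return sum(measures[name] * w for name, w in weights.items()) / total_weight

# Hypothetical standardised scores for one institution in one subject.
measures = {
    "nss_teaching": 82.0,       # satisfaction with teaching (input/experience)
    "spend_per_student": 60.0,  # expenditure on students (input)
    "entry_tariff": 70.0,
    "value_added": 65.0,        # good degrees relative to entry grades
    "career_prospects": 75.0,   # graduate-level employment (output)
}
# Hypothetical weights summing to 1.0.
weights = {
    "nss_teaching": 0.25,
    "spend_per_student": 0.15,
    "entry_tariff": 0.15,
    "value_added": 0.20,
    "career_prospects": 0.25,
}

print(round(composite_score(measures, weights), 2))  # 71.75
```

Institutions are then ranked on this composite within each subject; the real guide uses eight measures and its own weightings, which differ from the toy numbers above.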

A lot of emphasis is given to student experience, through the outcomes of the National Student Survey, and entry grades are counted twice – first in the entry tariff itself, and second in the “value added” measure, which assesses the proportion of good degrees relative to the entry grades of individual students.

The top four places are unchanged – Cambridge, Oxford, St Andrews and Surrey. The new entrant into the top five is Loughborough.

The big winners this year are Manchester Met, Northumbria, City, Bradford, Anglia Ruskin, Derby, Liverpool Hope and Sunderland.

Going down are Liverpool John Moores, Queen Margaret, Brunel, Brighton, Cumbria and Birmingham City.

Staffordshire University has pleasingly gone up 14 places to 69th.


Normal service is resumed?

After a quiet time on the wonk front, last week saw the publication of the White Paper, two new reports on the employability of STEM graduates, the announcement of a Higher Education Bill in the Queen’s Speech, and the launch of the technical consultation on the Teaching Excellence Framework – not forgetting the previous week’s plans for consulting on the future of DLHE. Anyone would think that HE wonks had been twiddling their thumbs, with nothing to critique or criticise. For a really good set of resources on all of this, it is worth looking at WonkHE.

The White Paper contained few real surprises – changes to quality arrangements, making it easier for new entrants to join the market, the introduction of a teaching excellence framework, and changes to the landscape and research support – all were consulted on in the earlier Green Paper. Overall, the sector has not been unreservedly supportive, but even with a small parliamentary majority the bill is likely to become law, so we need to learn how to work as well as possible within this revised landscape.

Overall, the changes are intended to drive further the marketisation of higher education – however much we might point out that HE does not operate as a fully open market, the government is wedded to the idea that increasing competition will drive up quality. Hence the idea that new entrants – “challenger” institutions – will be able to provide competition to existing incumbents. Similarly, the teaching excellence framework is touted as providing more information to prospective students, helping them to make more informed decisions. There is, of course, little evidence that students make decisions purely on data, and many students may not have a free choice of where they study, given financial circumstances and family or work commitments.

Nonetheless, we will have a TEF, and so it’s important to understand what will drive success in this, so that we can get the best possible outcome which reflects our performance. One piece of good news is that the government did listen to the sector in terms of timing of implementation, even if concerns about the metrics to be used fell upon stony ground.

From the technical consultation, we know that the following principles should underpin TEF:

  • keep bureaucracy and burden to a minimum
  • be voluntary, allowing providers to consider the benefits and costs of applying before deciding whether or not they wish to
  • allow for diverse forms of excellence to be identified and recognised
  • support rather than constrain creativity and innovation
  • respect institutional autonomy
  • be based on peer assessment
  • be robust and transparent
  • result in clear judgements about excellence for students, employers and other stakeholders
  • avoid driving perverse or unintended behaviours in the pursuit of demonstrating excellence
  • be sufficiently flexible to allow for further development as the TEF evolves.

From year 2 of the TEF, institutions that choose to be assessed will be judged as meeting one of three outcomes: Meets Expectations, Excellent or Outstanding. To get there, we would be assessed on teaching quality, learning environment, student outcomes and learning gain.

And the part we need to be mindful of is how this will be assessed.

Teaching quality will be based on questions 1–9 of the National Student Survey (teaching, and assessment and feedback). Learning environment will be judged on questions 10–12 of the NSS (academic support) and non-continuation data from HESA, while student outcomes will be assessed using the results of the DLHE survey.

This does look remarkably like a league table, and so institutions will work harder than ever to make sure that their NSS results and DLHE figures show outcomes in the best possible light.

In addition to the data, providers will make a written submission of no more than 15 pages. This is where we will be able to give more context to what we do – examples cited in the document include: use of student surveys, collecting and responding to module feedback, staff development activities, timeliness of feedback, use of employers on validation panels, and levels of contact time and independent study.

That is a lot to cover in 15 pages, so it will be key for institutions to define their policies really clearly – how their various mechanisms work, and how they can be shown to improve student experience and outcomes.

Our recent work on changing module evaluation processes and observation of teaching, and our review of quality processes will put us in a good position to explain how we manage our academic delivery to provide the best experience for students. We will clearly need to focus more on some of our student survey scores, and get to the bottom of why we have such a wide variety of reported experiences.

Next steps for us will be: how we review our student survey outcomes; how we deliver our new employability strategy; how we ensure that information from module evaluations and teaching observations is used to optimise student success; and how we review the performance of all of our courses.

There will no doubt be ongoing resistance to the TEF – the metrics chosen are still not ideal, and when we move to subject-level analysis there will be concerns about the reliability of the data – but this is a system we are going to have to work with. It would make sense to ensure we are as well prepared as we can be.