TEF – the finish line is in sight

The finish line is now in sight: across the country, policy wonks and planners are finessing their submissions for the Teaching Excellence Framework.

I’ve previously written for MediaFHE on the decision to rank providers as gold, silver or bronze, and on how this system could be seen to be flawed.


More recently, an interesting article was published this week by Gordon McKenzie, CEO of GuildHE, who questioned the amount of predestination versus free will in the TEF.

“..this may just be the logical consequence of metrics that may be the best we have but are not a perfect proxy for teaching excellence; if the measure is inherently vulnerable then the narrative has to concentrate on shoring it up. But it is also a bit of a shame. While the specification does touch on examples of the rich activity that makes for an excellent learning environment and the highest quality teaching, I fear this richness will get squeezed out of the 15 pages to which submissions are limited and will fall victim to the need to feed the metrics. The structure of any performance assessment framework tends to shape the responses and behaviour of those being assessed. As teachers teach to the test, so providers will submit to the metrics.”

Looking at the assessment process, the implication is that the metrics being used – the National Student Survey, DLHE and non-continuation rates, with evidence of how these are split by student demographics – are going to be the primary determinant of a provider’s TEF outcome. The updated guidance from HEFCE (originally published in September and updated this week) reinforces this:

Looking into the scoring process (sections 7.10 and 7.11), we learn that:

“A provider with three or more positive flags (either + or ++) and no negative flags (either – or – – ) should be considered initially as Gold.

A provider with two or more negative flags should be considered initially as Bronze, regardless of the number of positive flags. Given the focus of the TEF on excellence above the baseline, it would not be reasonable to assign an initial rating above Bronze to a provider that is below benchmark in two or more areas.

All other providers, including those with no flags at all, should be considered initially as Silver.

In all cases, the initial hypothesis will be subject to greater scrutiny and in the next steps, and may change in the light of additional evidence. This is particularly so for providers that have a mix of positive and negative flags.”
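Taken at face value, the quoted rules are mechanical enough to express in a few lines of code. The sketch below is only an illustration of that initial-hypothesis step, with hypothetical flag counts as inputs; in the real process the assessors then test the hypothesis against the split metrics and the written submission.

```python
def initial_tef_hypothesis(positive_flags: int, negative_flags: int) -> str:
    """Initial rating hypothesis from core-metric flags (per sections 7.10-7.11).

    positive_flags counts '+' and '++' flags; negative_flags counts '-' and '--'.
    This is only the starting hypothesis, not the final award.
    """
    if negative_flags >= 2:
        return "Bronze"   # two or more negative flags, regardless of positives
    if positive_flags >= 3 and negative_flags == 0:
        return "Gold"     # three or more positive flags and no negatives
    return "Silver"       # everything else, including no flags at all


# Hypothetical providers:
print(initial_tef_hypothesis(positive_flags=4, negative_flags=0))  # Gold
print(initial_tef_hypothesis(positive_flags=0, negative_flags=0))  # Silver
print(initial_tef_hypothesis(positive_flags=3, negative_flags=2))  # Bronze
```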

All providers received their illustrative metrics back in July 2016, with the note that the final versions would not vary significantly. Indeed, looking at the actual data provided this week, we can see that there has been minimal change.

So it’s like a great game of poker – no one is revealing their hand, or saying yet how they will approach the written submission, but knowing how heavily the metrics will influence the initial assessment of gold, silver or bronze, most providers already have a pretty good idea of their likely outcome.

For those providers with the most clear-cut metrics – the gold and bronze award winners – the results would seem to be predestined. With seemingly little opportunity for contextual explanations to change the decision of the TEF assessors, those providers will be able to say now what they expect to score in TEF. They’ll also know in which areas they would need to improve in future, or which groups of students they might need to focus on. Those with a mixture of good metrics, no significant flags and perhaps only one poor score will be able to create a narrative for a silver award.

One thing we should welcome is the emphasis on different groups of students in the split metrics. The use of these figures, and the possible impact on a university’s TEF rating of poor experience or outcomes for students from WP backgrounds or non-white ethnicities, might act as a nudge for the social mobility agenda that universities can influence.

It’s also worth noting a comment in the guidance on NSS scores (section 7.21b):

“Assessors should be careful not to overweight information coming from the NSS, which provides three separate metrics in two out of three aspects, and ensure that positive performance on these metrics is triangulated against performance against the other metrics and additional evidence. They should also bear in mind that it has been suggested that, in some cases, stretching and rigorous course design, standards and assessment (features of criterion TQ3), could adversely affect NSS scores.”

Heaven forfend that one of our “top” universities fails to do well because of a poor score for student experience.

And finally on outcomes (section 7.32):

“Should a provider include very little additional evidence in its submission, proportionately more weight will be placed on the core and split metrics in making decisions. In the extreme case where a provider submission contains no substantive additional evidence, assessors will be required to make a judgement based on the core and split metrics alone, according to the following rules:

Five or six positive flags in the core metrics for the mode of delivery in which it teaches the most students and no negative flags in either mode of delivery or split metrics confers a rating of Gold.

No flags, one, two, three or four positive flags in the core metrics for the mode of delivery in which it teaches the most students and no negative flags in either mode of delivery or split metrics confers a rating of Silver.

Any negative flags in either mode of delivery for any core or split metric confers a rating of Bronze.”
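Read as a decision rule, this metrics-only fallback is even more mechanical than the initial hypothesis. A minimal sketch, again with hypothetical inputs (the count of positive flags on the core metrics for the mode of delivery with the most students, and whether any core or split metric in either mode carries a negative flag):

```python
def metrics_only_rating(core_positives_majority_mode: int,
                        any_negative_flag: bool) -> str:
    """Rating rule for submissions with no substantive additional evidence
    (section 7.32). Illustrative only."""
    if any_negative_flag:
        return "Bronze"   # any negative flag, core or split, in either mode
    if core_positives_majority_mode >= 5:
        return "Gold"     # five or six positive core flags, no negatives
    return "Silver"       # zero to four positive core flags, no negatives


print(metrics_only_rating(core_positives_majority_mode=6, any_negative_flag=False))  # Gold
print(metrics_only_rating(core_positives_majority_mode=2, any_negative_flag=True))   # Bronze
```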

If your own assessment of your score is that you pass the threshold for satisfactory quality, but that you have too many poor scores, then why would you put much effort into the written submission? You’re going to get a bronze award anyway.

And I still don’t think that’s how medals are awarded.

Differences in Student Outcomes

Successful outcomes for students are often used as a proxy for institutional quality, hence the use of good degree outcomes, or value added, in league tables. The forthcoming Teaching Excellence Framework will almost certainly look at student outcomes as a measure also. However, not all students succeed equally, and we know from our own work at StaffsUni of the gaps in attainment between different groups of students.

The recent Green Paper, as well as highlighting the possible future TEF, indicates the government’s desire to see an increase in the number of students from the most disadvantaged backgrounds, as well as to ensure that all students can achieve.

In the light of this, last Monday I attended a HEFCE conference in London, “Addressing differences in student outcomes: Developing strategic responses”, which looked at the findings of research into differential outcomes from King’s College London, and was an opportunity to hear from others in the sector on how they are tackling these issues.

Sessions attended were: the introduction by Chris Millward, Director of Policy at HEFCE; a presentation by Anna Mountford-Zimdars of KCL; a session by Sorana Vieru and Malia Bouattia of NUS; and finally a session by Philip Plowden, DVC of the University of Derby.

These are my notes of the day. Copies of the presentations can be viewed here.

Chris Millward – HEFCE Director of Policy

Chris Millward started by considering where the government is on this agenda, linking the Green Paper, the Treasury plan and plans from BIS.

Government wants to see a more diverse range of backgrounds in HE, in terms of entry, success and outcomes. For instance: doubling the number of students from disadvantaged backgrounds by 2020; increasing the number of BME students by 20% by 2020; and asking the sector to address differences in outcomes.

This means more responsibility for universities together with strengthened guidance to OFFA and the potential role of the Office for Students. There is an anticipated stronger role in quality assurance processes through the impact of TEF and the future need to measure difference in outcomes based on data and metrics agreed by government. This will lead to more targeted funding together with more emphasis on meeting obligations.

The HEFCE analysis shows an attainment gap for BME students based on A-level analysis, and the more other factors you add in, the bigger the gaps become.

In addition, when looking at POLAR3 domicile, there are further unexplained differences in HE outcomes.

When considering students with a disability, the data suggests that those students who received DSA support perform above average, while those without perform less well.

On postgraduate progression, there is currently an unexplained difference in outcomes based on POLAR3 quintiles.

When considering employment, and looking at the 40-month survey rather than the 6-month DLHE, all POLAR3 quintiles have worse outcomes than quintile 5, particularly for professional employment. There are worse outcomes for students with a disability, irrespective of DSA, and worse employment outcomes for all categories of BME students, particularly in professional employment. Finally, on gender, men perform worse overall on employment, but better in professional employment.

The HEFCE approaches to working on closing the gaps in outcomes include:

  • National outreach programme
  • Funding for disabled students
  • Supporting successful outcomes
  • Catalyst fund

Anna Mountford-Zimdars – KCL

Dr Mountford-Zimdars presented the outcomes of a major piece of research into differential outcomes, which is available here.

“Access without success is no opportunity”

The research considered three questions:

  • What is the pattern (empirical)?
  • How do we explain it (causal model)?
  • How do we change it effectively (policy and empirical)?

The question was asked: “Do we need causality – if an intervention works, does the causal model matter?”

She explained the pattern of differential attainment using a model that looked through a lens of macro/meso/micro levels, and at experiences pre-HE, during HE and post-HE.

Four explanatory dimensions were proposed:

  • Curricula and learning
  • Relationships – a sense of belonging is probably the most important factor
  • Cultural, social and economic capital
  • Psychosocial and identity factors

From the research, which involved asking questions of a large number of institutions, the level of awareness of the issue differed across institutions, although this may be changing now, possibly due to the proposals in TEF.

In terms of those institutions that tackled the differential outcomes issues the most successfully:

  • A whole-institution approach is the most successful
  • Need students, academics and professional services working together
  • Bottom up approaches with strategic support
  • Universal and targeted interventions

Effective interventions were seen to be:

  • Improvements to T&L
  • Inclusive learning and curricula
  • Deconstructing assessment
  • Meaningful interactions
  • Role models and mentoring
  • Engagement with institution
  • Generally, though, there were few evaluations, and especially a lack of long-term evaluations

She ended with five groups of recommendations:

  • Evidence base
  • Raising awareness
  • Embedding agenda
  • Staff as change agents
  • Students as change agents

Sorana Vieru and Malia Bouattia – NUS

 This presentation started from a previous NUS report, Race for Equality, and went on to look at a new NUS campaign on liberating the curriculum.

From previous NUS work, 42% of students said that the curriculum did not reflect their experiences, particularly in history and philosophy. As well as looking at students as members of one particular demographic group, it was important to look at intersections between groups.

Work from NUS highlighted:

  • 23% of black students described learning environment as cliquey
  • Disabled students more dissatisfied in NSS
  • 10% of trans students not willing to speak up in class
  • Black students report lower levels of satisfaction on NSS on assessment and feedback

There was a focus on liberation, equality and diversity, and the launch of a new campaign – “Liberate my Degree”. An online hub has been provided for officers and reps, with training resources to allow them to engage in debate in their institutions and to support students becoming co-creators of the curriculum.

Getting there – Helen Hathaway and Philip Plowden

Speakers from University of Derby showed the pragmatic steps they have taken to challenge the gap in attainment between white and BME students.

In terms of background, the University has 28,000 students, most of whom come from the state school sector; 20% of these self-identified as BME. The attainment gap was 24.6% in 2009-10. The impact of the work so far is that the gap has closed to 12.4% in 2014-15, although since there was an increase in attainment across all areas, this is a moving target.

The important thing is that there is no single answer, so there was a need to stop looking for one and instead to focus on the myriad interventions and see what impact they have.

  • No magic bullet
  • Post racial inclusive approach
  • Suite of different strategies needed

Four main areas of interventions are used: Relationships, academic processes, psychological processes, and social capital.

The project at Derby explored data (down to module level) and relied on the regular programme health checks, which used a digest of metrics including attainment by ethnicity. In these, the DVC meets with programme leads to engage with course teams at the chalk face. Areas covered include outcomes, finances, reliance on clearing, and staff numbers. In particular, the programme health checks looked at “spiky” degree profiles – looking at individual modules and gaps, not with the intention of playing a blame game but to ask what is going right and to ask others to consider that.
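As a toy illustration of the “spiky profile” idea (nothing here reflects Derby’s actual system; the module names and figures are invented), one could flag modules whose attainment gap sits well above the course-wide average:

```python
# Hypothetical proportions of students achieving 60+ in each module, split by
# ethnicity; a real health check would pull these from the student record system.
course_modules = {
    "Module A": {"white": 0.78, "bme": 0.74},
    "Module B": {"white": 0.80, "bme": 0.52},
    "Module C": {"white": 0.71, "bme": 0.66},
}

gaps = {name: round(m["white"] - m["bme"], 2) for name, m in course_modules.items()}
course_gap = sum(gaps.values()) / len(gaps)

for name, gap in gaps.items():
    if gap > course_gap + 0.10:   # arbitrary threshold for flagging a "spike"
        print(f"{name}: gap {gap:.0%} vs course average {course_gap:.0%}")
```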

To support interventions, Derby developed PReSS – Practical Recipes for Student Success – which contains evaluations and case studies and can be accessed at http://uodpress.wordpress.com

The key lessons learned were:

  • No simple solution – avoid paralysis by analysis; just crack on and do what works.
  • Learn from others
  • Post-racial inclusive approach. Difficult to reconcile this with some of the morning’s talk – is this unduly dismissive of liberation approaches?
  • Importance of communication – giving the work a degree of profile. But once in the mainstream it might get lost.
  • Need consistent way to measure attainment gap.
  • Important to evaluate interventions.

Points from Discussions

A lively discussion followed, and the following are just snippets of some of the topics – in some cases these reflect discussions we have had in our own institution, but I add them almost as provocations for further debate.

  • Is there a threat to academic staff when we discuss the BME and other attainment gaps? A danger of appearing accusatory?
  • Why are there differences between subjects such as business and nursing – do cohorts have an impact? Why do the subjects with the smallest attainment gaps want to engage in the debate the most?
  • How do we check who uses the resources to support inclusive learning, and should we check?
  • How do you liberate the curriculum and how do we re-educate staff to draw on a wider range of ideas, since they are a product of their own subject and environment?
  • What about the attainment gap for students who live at home, where home life and work get in the way of study?

Conclusions

In all, a thought-provoking day. A lot of emphasis, as always, on the BME attainment gap, but also more opportunity to explore attainment more generally and to recognise how this agenda will become increasingly important post-TEF.

In terms of what we could do next: as we develop better internal metrics for modules and courses, we can start to see how to use this information to better understand the outcomes that our students achieve. Linking this to revisions in the way we review our courses, both from a quality assurance and enhancement perspective and through a more data-centric health check, would provide the opportunity to have the right discussions and to ensure that we maximise the opportunities for our students to be successful.

 

Presentation to Academic Group Leaders

We regularly hold a forum at Staffordshire University for our Academic Group Leaders – these are the senior academic staff who are responsible for line managing and leading groups of academic colleagues.

This week I led the forum, with a presentation on league tables and on some of the implications of the recent Green Paper “Fulfilling our Potential. Teaching Excellence, Social Mobility and Student Choice“.

The slides can be viewed here at Slideshare

 

Politics and the TEF

Prior to the general election, I wrote a blog post reviewing the various parties’ views on HE. Following the Conservative majority I wrote another piece, which concluded: “What is still not clear is how universities might be regulated, how quality mechanisms will operate in future, and how the regulatory and quality regime will be changed to encompass the more diverse range of providers”.

Following the various party conferences, we now enter a period when we await, with bated breath, the green paper on higher education. For an insight into the Conservative conference, I recommend “Welcome to the Northern Powerhouse of Cards” by Martin McQuillan of Kingston University.

There’s little point in looking at the other parties right now – there is not likely to be an election till 2020, and Labour haven’t identified their position on fees, let alone how they will carry out the role of opposition to the green paper.

The Conservatives are in an interesting situation. Cameron, as a leader who has acted as a CEO, has already indicated his intention to step down; hence for everyone else it’s “eyes on the prize”. As deputy CEO, Osborne has been calling the shots on HE policy, since the Treasury is dictating policy more clearly than any other department. May is setting out her stall, showing clear opposition to overseas students, which will win her no friends in universities. Boris is harrumphing around the margins, and, looking more widely, Hunt is exhorting everyone to work harder. Meanwhile, Javid is happy to drive through large cuts at BIS, and we can expect that many of the organisations that currently work in the HE sector may cease to exist.

It’s into this environment, with his boss supporting 40% cuts to BIS, that Johnson will need to produce a green paper and ultimately drive legislation through parliament.

All of a sudden, this looks threatening to HEFCE. The HEFCE consultation on QA is in tune with government and seems to promote a move to a deregulatory ideology, implying the demise of the QAA. More recently though, with questions being asked about whether the remaining amounts of funding could be administered from elsewhere, and the need for a body to run the TEF, HEFCE themselves look more vulnerable.

The Teaching Excellence Framework will clearly be a big part of the green paper. It was a commitment from Osborne (that Treasury driver again) and is detailed in the government’s productivity plan, “Fixing the foundations: Creating a more prosperous nation”:

Excellence in teaching
4.7 The government will introduce a new Teaching Excellence Framework to sharpen incentives for institutions to provide excellent teaching, as currently exist for research. This will improve the value for money and return on investment for both students and the government, and will contribute to aligning graduate skills and expectations with the needs of employers. The government will consult later this year on how a Teaching Excellence Framework can be developed, including outcome-focussed criteria and metrics. The Teaching Excellence Framework will inform student decision-making, continue to support a high average wage premium for graduates and ensure that students’ hard-won qualifications keep their value over time.
4.8 To support teaching excellence, the government will allow institutions offering high quality teaching to increase their tuition fees in line with inflation from 2017-18, and will consult on the mechanisms to do this. This will reward excellent institutions with higher fee income, while ensuring students get good value from the tuition loans that the government underwrites.

Johnson now needs to steer this through parliament, at the same time as BIS is facing large cuts, and he needs to produce something that will work, both as a fix in the short term, and as a longer term evaluation of teaching.

To be able to have variable fees from 2017-18 will mean having measures in place during the current academic year. Inevitably these will initially be based on existing measures – NSS, HESA returns and DLHE.

Longer term, though, a new set of measures will come in, which will provide challenges to the sector and to individual institutions. In the Times Higher, Johnson has made it clear how he would like the metrics to be set up:

Widening participation and access will be intimately linked to the TEF. One of the core metrics we envisage using in the TEF will be the progress and the value add [for] students from disadvantaged backgrounds, measuring it for example in terms of their retention and completion rates. And their [universities’] success in moving students on to either further study or graduate work.

On having an impact on further marketisation, Johnson says:

the system should “not only have the capacity for more rapid market entry, but we [should] have the capacity for more rapid market share shifts between universities than we have hitherto seen in the sector”.

and  that

he wanted a system where “market share can shift towards where teaching quality really resides. Our teaching excellence framework will be an important signal to students of where quality resides, discipline by discipline, institution by institution.”

He’s asking an awful lot from a set of metrics that are not yet defined, and that will have numerous questions raised by many in the sector.

In the meantime, what can individuals and institutions do?

Firstly there is the opportunity to respond to the government’s inquiry into assessing the quality of HE, which asks specific questions such as:

  • What should be the objectives of a Teaching Excellence Framework (‘TEF’)?
  • What are the institutional behaviours a TEF should drive? How can a system be designed to avoid unintended consequences?
  • What should be the relationship between the TEF and fee level?

Secondly, we can start looking at the various measures of value added or learning gain for different groups of students. HEFCE are already supporting a range of projects, involving over 70 institutions, to look at learning gain.

One of the unintended consequences that TEF might bring about is a gaming of the system. I’m not suggesting that data returns that feed into league tables are inaccurate, but one part of a successful league table result is a set of carefully constructed data returns. It’s equally likely that it will be possible to do something similar with any TEF submission, so all institutions will learn very quickly how to report data in the best possible way.

Finally, recognising that TEF will be used to drive rapid shifts in market share (a euphemism?), we will all need to get very good not only at supporting the widest range of students, but also at understanding how the metrics apply to us and how we can build internal systems to replicate them.


Differences in Degree Outcomes

New from HEFCE this week: a report on “Differences in Degree Outcomes: the Effect of Subject and Student Characteristics”, which looks at the outcomes of students who graduated in 2013-14. Some of this data I have previously reported when looking at HESA data on the impact of ethnicity on degree outcomes for the previous year.

The results of the HEFCE survey are not startling – they largely reinforce things we already know about which factors have an impact on achievement. The challenge now is to learn how to address each of these; with the recent comments by the new universities minister on widening participation, and our own commitment to supporting a diverse population of students, awareness of these trends and how we tackle them will be crucial for the success of individuals and of the institution.

HEFCE considered the following variables when looking at the differences in outcomes:

  • age
  • disability status
  • ethnicity
  • The Participation of Local Areas measure (important for high WP populations)
  • sex
  • subject of study
  • prior attainment (in terms of qualifications held on entry to higher education)
  • previous school type
  • institution attended

The interesting part of the analysis is not the differences in outcomes that can be seen, but how much these differences can or cannot be explained by the influence of other factors.

Subject

Certain subjects are more likely to award 1sts/2(i)s, and the table below shows those subjects we offer at Staffordshire – it will be interesting to compare our recent results with those for the sector by subject.

Subject | % first or upper second | % first
Subjects allied to medicine | 69% | 24%
Biological sciences | 70% | 18%
Physical sciences | 73% | 25%
Mathematical sciences | 73% | 35%
Computer science | 66% | 28%
Engineering and technology | 74% | 30%
Social studies | 73% | 16%
Law | 69% | 12%
Business and administrative studies | 71% | 21%
Mass communication and documentation | 75% | 15%
Historical and philosophical studies | 82% | 19%
Creative arts and design | 72% | 21%
Education | 68% | 18%
Combined | 60% | 16%

I always thought it was apocryphal that law didn’t award firsts – across the sector it would appear to be true!
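As a first pass at that comparison with our own results, something like the sketch below would do; the internal figures are hypothetical placeholders, while the sector figures are taken from the table above.

```python
# Sector rates of firsts/2(i)s for a few subjects, from the table above.
sector_good_degree = {
    "Law": 69,
    "Computer science": 66,
    "Engineering and technology": 74,
    "Creative arts and design": 72,
}

# Hypothetical internal rates for the same subjects (placeholders only).
our_good_degree = {
    "Law": 64,
    "Computer science": 70,
    "Engineering and technology": 71,
    "Creative arts and design": 75,
}

for subject, sector_rate in sector_good_degree.items():
    gap = our_good_degree[subject] - sector_rate
    print(f"{subject}: {gap:+d} percentage points against the sector")
```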

Entry Tariff

On entry tariff, there is a clear relationship – higher entry tariffs lead to higher numbers of good degrees, which can also be seen when looking at league table data. This is one of the reasons that the Guardian league table uses a “value added” measure which seeks to adjust for entry tariff.


 

Mode of Study

In general, part-time students have worse outcomes than full-time students, even after adjusting for variations in entry tariff.

Age

The raw data shows that young students are 11 percentage points more likely to gain a good degree compared with mature entrants.

Gender

Across all entry tariffs, women are more likely to gain good degrees than men.

Disability

Graduates with a disability are slightly less likely to gain a good degree than those without a declared disability.

Ethnicity

This is the area with the biggest gap. 76% of white students gain a good degree, compared to 60% of black and minority ethnic students.

Even allowing for other factors, the unexplained gap is still equivalent to 15 percentage points.

Previous School

In most cases students from state schools outperform those from independent schools.

Neighbourhood HE Participation

Students coming from neighbourhoods with the highest rates of HE participation also gain the highest numbers of good degrees.

Implications

The recent speech by Jo Johnson referred to the importance of universities in driving social mobility and the sector’s work in widening participation.

This data provides further information that could be used to justify the costs of supporting WP in universities, and for focusing on trying to close gaps in attainment.

Much focus is given to the data provided by UCAS, but to understand how well the sector and individual universities are performing in terms of closing these gaps, much fuller datasets need to be considered, taking into account retention, progression and ultimately employment – even if all our students gain the degrees they deserve, if they still fail to progress into appropriate graduate roles then social mobility isn’t realisable for everyone.

As we move into a potential quality regime that could be metrics-based, together with a Teaching Excellence Framework which will certainly use a variety of metrics (possibly including learning gain), there will be plenty of work to be done in generating data and analysing it.

However, the focus also has to go beyond analysing data. How can we use it to understand our students both as individuals and as cohorts? How can we use data to support our staff better in teaching and assessing their students? Finally, how can we learn to change practices and behaviours based on evidence?


Do the numbers matter?

We are now at the point in the year where we start getting hold of course-level metrics – on employability through DLHE, on student experience from the NSS, and on student performance, in terms of retention and attainment, through our own datasets.

Bringing these together means that we can create a snapshot of how “well” a course might have performed over the last year.

There have been a number of publications over the summer on the use of numbers and metrics, in particular the report “The Metric Tide”, which reflects on the use of metrics to assess research excellence.

However, this publication also contains chapters on management by metrics and on the culture of counting, and as someone who works extensively on the performance of our portfolio of courses, as well as on league tables, I found this of interest.

“Across the higher education sector, quantitative data is now used far more widely as a management aid, reflecting developments in the private sector over recent decades. […] most universities now plan resource allocation centrally, often drawing on the advice of dedicated intelligence and analysis units that gather information from departments and faculties. The use of such systems has helped universities to strengthen their reputation as responsible, well-managed institutions. The relatively robust financial position of the sector, and the continued trust placed in universities by public funders to manage their own affairs, is in part founded on such perceptions of sound financial governance.

The extent to which management systems in HEIs help or hinder institutional success is of course contested. On the positive side, such systems have helped to make decision making fairer and more transparent, and allowed institutions to tackle genuine cases of underperformance. At the same time, many within academia resist moves towards greater quantification of performance management on the grounds that these will erode academic freedoms and the traditional values of universities. There is of course a proper place for competition in academic life, but there are also growing concerns about an expansion in the number and reach of managers, and the distortions that can be created by systems of institutionalized audit.”

 

What is important, then, is how we deal with data. A list of numbers alone does not create useful management information. Indeed, even a collation or aggregation of all the data (similar to a league table approach) is still only one part of the picture.

What data or information such as this does provide us with are some insights into how different parts of the university are faring, or how our different groups of students see us.

The useful work starts when we realise how to use the numbers – this is where we now have those conversations with course teams to find out why a metric is particularly high or low. Is there some really great practice that can be shared with other people? Is there a reason for a disappointing NSS score?

Only by going beyond the numbers and engaging with the course teams will we get the full insight into why the results are as they are.

This is not to say that everything can be explained away. The whole point of building up a metrics approach to assessing what we do is threefold:

  • To make sure all colleagues are aware of how measurable outcomes affect us reputationally and reflect the results and experience of actual students
  • To provide consistent, reliable management information to act as a trigger
  • To raise the data understanding capability of all groups of staff.

We should not be afraid of looking at metrics to judge a programme, but we should also become better at using that information to understand exactly why we perform the way we do.

As well as looking at the raw data, we also need to look closely at what it is we are trying to achieve, and how this might influence how we set up benchmarks and targets. Some examples might be:

  • Benchmarking NSS results for subjects against the sector average for that subject. This shows how well we do in comparison with others, rather than against an internal university average score (guess what – half our courses were above average).
  • Considering a calculation of value added instead of good degree outcomes. For a university with a significant intake of widening participation students, this might be a better reflection of “distance travelled” and show the results of our teaching. Any VA score would have to be different from that used in the league tables, which only consider 1sts and 2(i)s as a good outcome; for some students, a 2(ii) might be appropriate. (A minimal sketch of this idea follows the list.)
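To illustrate the value-added idea, here is a minimal sketch. The expected rates by entry tariff band are hypothetical and this is not the Guardian’s actual methodology; the point is simply that “distance travelled” compares achieved outcomes with what the entry profile would predict.

```python
# Hypothetical expected rates of good degrees by entry tariff band.
expected_good_degree_rate = {
    "low": 0.55,
    "medium": 0.68,
    "high": 0.82,
}

def value_added(tariff_band: str, achieved_rate: float) -> float:
    """Positive values mean a cohort outperformed its entry profile."""
    return round(achieved_rate - expected_good_degree_rate[tariff_band], 3)

# Hypothetical cohort: mostly low-tariff entrants, 62% achieving a good degree.
print(value_added("low", 0.62))   # 0.07 above expectation
```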

We should all be aware that using metrics to assess quality and performance is becoming increasingly important.

The current consultation from HEFCE on the future of quality assurance has a number of major themes, but two of these are around data and governance.

In the proposals are the suggestions that quality could be assured by a university identifying its own range of measures that indicate quality, and that governing bodies will be in a position to make judgements of success against these.

This could be an opportunity to create a set of metrics that really measure where we want our successes to be and that are actually aligned to the mission of the university, rather than ones that might suit another university more readily.

Secondly, it does mean that governing bodies (and the people that brief them) will need to become more aware of data, its limitations and meanings.

Finally, and this is a concern: the proposed Teaching Excellence Framework will most likely be put in place very quickly, and will be metrics-based. In the time available, it might only be based on metrics and measures that are already well known and used – NSS, DLHE, good degrees (not dissimilar to a league table so far). Since the ability to charge increased fees will depend on success in the TEF, it does mean that, even though we may in future be able to identify our own measures of success, in the short term we cannot stop focussing on those key indicators.

 

Consultation on QA Arrangements

In case you missed it last week, HEFCE have now published their consultation on arrangements for quality assurance in higher education.

Much of what appears in the document was already trailed; perhaps what is most interesting is the reaction across the sector since publication.

The key themes as described by HEFCE are:

  • A shift from process-driven assurance to analysis of student academic outcomes. A number of respondents to the first phase of the review wished to see this shift. It builds on existing institutional activity to drive excellence and innovation in learning and teaching in the context of an institution’s own mission, location and modes of delivery, and the nature of their student body.
  • Strengthening the existing external examining system to protect the integrity of academic standards. There was strong support in the first phase of the review for the external examining system, but recognition of the need for further modernisation and professionalisation.
  • An enhanced role for universities’ and colleges’ own assurance systems. Governing bodies would confirm that their senates or academic boards were reviewing the quality of their students’ academic experience and (for institutions with degree awarding powers) academic output standards, and provide assurance that there were appropriate action plans in place where necessary

Reading through the document, three themes become increasingly important for me:

  • the increasing role of university governance
  • the need for internal data to provide assurance
  • the development of a teaching excellence framework.

One organisation that doesn’t get a mention at all in the document is the QAA. Their response included:

“QAA can bring extensive expertise to this debate. We will be offering ideas to shape a genuinely risk-based, proportionate approach, tailored to the track record and circumstances of each individual college or university; an approach that is truly UK-wide and underpins the reputation of UK higher education internationally.”

In a speech earlier in the week, the QAA’s chief executive, Anthony McClaran, put their view a little more forcefully:

There are also a number of fundamental principles missing from what is being proposed.

The value of external cyclical review and the critical role it has in protecting the interests of our students, supporting providers developmentally through enhancement, providing public assurance, complying with European standards and safeguarding the global reputation of the sector. And not, as suggested in the consultation, merely a ‘repeated retesting against baseline requirements’.

Nor does it properly recognise the importance of a coherent system with a single independent quality assurance body, a single body which avoids fragmentation and weakening of the system, and enables a level playing field covering not only publicly funded universities and colleges, but also alternative providers.

Also, the retention of a UK-wide system and, critically, a single UK-wide framework as we have today, which is respected and trusted globally.

And a system which continues to meet fully now – not as an aspiration for the future – both European and wider international expectations.

And with international quality assurance activities which continue to support UK providers both in recruiting international students to this country and with their transnational education activities overseas.

QAA, working with the UK sector, is known, trusted and respected round the world as the safeguarder of quality and standards in UK higher education. Given the international objectives of the sector and also our government’s export ambitions, our work will become even more crucial in the future.

We will be responding to this consultation.

 

Million+ responded to the consultation with a piece by its Chair, our VC Prof Michael Gunn:

 

The consultation raises a number of complex issues and universities will wish to carefully consider their responses. However if the end result is that England loses an independent external quality assurance system there would be concerns about the impact on the reputation of UK higher education both within the UK but also overseas.

Universities UK responded with:

Effective quality assessment will continue to play a central role in securing our global reputation and providing assurances to students, the government, and the public more widely. It is important that this remains fit for purpose for the whole of the United Kingdom and in a significantly changed higher education environment, adapting to increasing diversity in students and institutions. The proposals set out by the English, Welsh and Northern Irish funding bodies pick up this challenge, setting out clear proposals for reform

In response to the various voices making themselves heard, and in particular the fact that the original document does seem to have a few bits missing, HEFCE provided a blog article entitled “No consultation document survives first contact with its stakeholders (without the need for further elaboration)”.

Here HEFCE say:

The consultation document’s first formal engagement with the world has revealed the need for further elaboration and explanation, but the proposals themselves are holding up.  And the purpose of the consultation is to set out proposals and then to gather and test responses.  And then to think some more

Clearly this is going to be a major piece of work through the summer, not just for HEFCE, but for all relevant stakeholders.

Areas that we might want to think about are how we involve governance more centrally in assuring standards, which links to how we provide information to allow such judgments to be made.

Finally this week, the new universities minister, Jo Johnson, announced plans to create a Teaching Excellence Framework. Clearly this links to the HEFCE consultation, and will be the next challenge for us to face. Hopefully we won’t just be replacing one review of quality that focused on process rather than outcomes with another, for teaching, that focuses on process.


It’s all about the money, money, money

This week HESA published the latest details of university expenditure, covering 2013-14. As an institution we have just gone through our own internal budget meetings, so it’s interesting to see how the money is spent across the sector.

[Chart: HE sector expenditure, 2013-14 – from https://www.hesa.ac.uk/pr/3561-press-release-216]

Firstly, let’s just consider the size of expenditure. For 2013-14, this was £29.4bn against income of £30.7bn, up from £25.8bn against income of £26.8bn in 2009-10.

As we go into election week, this is a reminder of the size of the sector and its growing importance to the economy, as well as the non-financial benefits of higher education that accrue to both the individual and to society.

The Times Higher reports on the data, identifying that the average surplus has gone up in the last year, and that the surpluses “support the view that the sector as a whole is financially sound”.

From that article, Phil McNaull, director of finance at the University of Edinburgh and deputy chair of the British Universities Finance Directors Group, says that surpluses should not lead people to think that things are now rosy.

“People look at organisations making a surplus and they think ‘profit’; they think you’re OK,” he says. “They don’t understand that you need to make surpluses to fund the future.”

And the future does hold challenges for the sector. Chief among them is the demand for capital spending, which is already evident on a walk around most university campuses: the growth in the number of shiny new buildings reflects how improving the student experience has become a priority amid an increasingly competitive recruitment environment.

I think we are all well aware of this, and that’s why the proposed new developments for our Stoke-on-Trent campus, on top of the work already carried out, mean that we will be able to offer a great student experience on a city centre campus.

 

Say CHEEse

Well, CHEE, anyway.

At a time when HEFCE are consulting on the future of quality assurance in the HE sector, and have invited views on a discussion document:

The discussion document contains questions on quality assessment that aim to stimulate wide-ranging discussion and debate on important high-level issues. Its purpose is to explore the deep, critical questions that need to be addressed before the more practical issues surrounding the design and implementation of any new quality assessment arrangements can be considered.

In the second consultation document we will set out clear options for the scope of future quality assessment activities. This will cover the way in which these are underpinned by the powers provided through the statutory and other duties of the funding bodies.

and large numbers of staff at my institution will be spending half a day grappling with the questions in the review, Universities UK have published their own report calling for changes to HE regulation.

Key recommendations from the report include:

  • A new approach for protecting the student interest in the rare event of institutional or course closure
  • The establishment of a register of approved higher education providers, giving the current higher education register greater regulatory status
  • The establishment of a new Council for Higher Education for England, evolved from the Higher Education Funding Council for England (HEFCE), which would lead and coordinate mechanisms to provide assurance of quality, equity and sustainability in higher education (in addition to its funding role)
  • The indication that necessary changes should be made to primary legislation in order to implement the proposals in this report


Pam Tatlow of Million+ has commented on the report, saying:

“There are serious doubts about whether any party will consider a higher education Bill to be a high priority early in the life of the next Parliament and much more likely that improvements in regulation or changes to the fee cap will be delivered by statutory instruments rather than primary legislation. However there are risks in arguing that the regulatory role of HEFCE should be expanded.

“HEFCE remains a significant funder in terms of research as well as providing some direct grant for teaching and its funding role could be increased if fees were reduced and direct grant restored. It would be highly unusual for a regulator to have a major role as a funder. Given there are so many unknowns, proposals to extend HEFCE’s regulatory role may be premature.

“It is also difficult to understand the rationale for subsuming the Office of Fair Access into HEFCE bearing in mind the primary legislation that underpins OFFA. An independent access regulator has been supported by all of the main parties and it is unlikely that any proposal to change this will find political favour after the election.”

UUK propose that the remit of CHEE should be:

  • Funding teaching, research and knowledge transfer
  • Maintaining the register of higher education providers
  • Applying and monitoring conditions attached to registration (including continuation of responsibilities for ensuring provision is made for assessment of quality) and applying appropriate sanctions where appropriate and necessary
  • Leading the coordination of higher education regulation
  • Working in partnership with the sector to develop mechanisms for student protection

In particular on quality assurance, the report proposes that QA:

i. be premised on co-regulation and co-ownership
ii. be responsive to the new environment, particularly the needs of students, and adopt an approach that is risk based and equitable between different providers
iii. represent value for money for its funders and keep regulatory burden to a minimum
iv. continue to form part of a UK-wide system
v. have a clear focus on academic quality assurance rather than other aspects of the full student experience
vi. have effective and appropriate governance and transparency for students and other relevant stakeholders
vii. ensure quality assurance expectations at a European level can continue to be met and the significance of transnational education recognised.

 

The report is strongly supportive of the autonomy of higher education institutions, and in particular the principles of co- and self-regulation. While these are defended, and it is noted that equality does not necessarily mean equity, the sector does need to remain aware of the questions being asked of it by the Competition and Markets Authority and by consumer groups such as Which?

Clearly, as the HE landscape continues to evolve and become more diverse, and in advance of a possible change of government, reviewing how we regulate and assure HE is critical. The hope must be that any changes to QA deliver a process that supports enhancement of teaching and learning as well as research, rather than creating a quality “industry” that generates paperwork but does little to impact on the majority of participants in HE, either staff or students.


Quality Assessment in HE

An interesting week for the wonks, as HEFCE announced that it was reviewing the arrangements for quality assurance in the sector, stating:

“UK higher education is undergoing rapid change. Our future quality assessment arrangements must continue to be internationally respected, to have the confidence of students, and to support a world-class HE sector.
We are looking to develop innovative approaches that are risk-based, proportionate, affordable, and low burden. Any new arrangements must build on established strengths and good practice, including institutions’ own robust quality assurance systems, and reflect the values and cultures of higher education. They will also need to demonstrate value for taxpayers’ and students’ money”

and after feedback from various stakeholders, will put the work out to tender.

Obviously, QAA (the current provider of Quality Assurance) had something to say:

“QAA has internationally recognised expertise in providing quality assurance and enhancement to an exceptional standard. In recent years, we have continued to adapt the quality assurance framework to meet the needs of a growing and dynamic sector, working with HE, FE and alternative providers. We look forward to continuing the development of quality assessment, protecting the public interest and supporting the UK higher education sector’s international reputation for excellence.”

Clearly assuming they’ll get the contract, or at least pointing out what a good job they have done so far.

And our university mission groups chipped in, with somewhat inevitable responses.  The Russell Group said:

“Universities with a strong track record of success which have been delivering high quality education for a long time should be subject to considerably less inspection and bureaucracy than newer institutions.

“Our universities will not flourish if they are over-regulated. Resources should be focused where problems of quality are most likely to occur.”

While million+ came up with:

“Vice-Chancellors will clearly want to be involved in this review but HEFCE needs to be careful not to throw the baby out with the bathwater. While there have been concerns about the QAA’s modus operandi, the system is certainly not broken and has the advantage of being UK-wide in scope and internationally recognised.”

I think the two mission groups said exactly what we would expect them to. I do wonder who the Russell Group are thinking of as “newer institutions”. That wouldn’t be former polytechnics with 100 years of HE history, would it? Or does it mean the real new kids on the block, the private providers and FE providers of HE?

Quality assurance arrangements may seem arcane, but all of this does matter, and as David Kernohan identifies over on his blog, it matters to every academic. Quality assurance can be seen as an evil which is inflicted upon institutions, and by institutions onto their staff, but the basic principles outlined in the QAA code of practice are eminently reasonable.

The devil of course is in the detail: firstly, how individual universities choose to interpret the code and operationalise quality through a series of managerial interventions, and secondly, how a review of a university’s “quality” can be anything more than a review of its QA processes.

Seeking a system that is “risk-based, proportionate, affordable, and low burden” might come as pleasant music to the ears of someone buried deep in generating a monitoring report that will be read by few, and followed up by fewer. But for the burden to be reduced internally within universities, a clear steer will be needed on what processes can be simplified and reduced (or, dare I say it, removed). If the assessment regime is to be risk-based and proportionate, then maybe institutions need to look at their own processes and make sure that these too are risk-based and proportionate.

It would mean developing criteria to assess and score risk – but with the development of data on award performance in terms of inputs and outcomes, and student evaluations, coupled with a business risk analysis (with on-campus teaching low risk compared to overseas delivery in a language other than English), this should not be insurmountable.
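As an illustration only, a risk score along these lines could be as simple as the sketch below; the indicators, weights and delivery-model categories are all hypothetical and would need to be agreed internally.

```python
# A minimal sketch of an internal course risk score, with hypothetical
# indicators and weights; real criteria and thresholds would be agreed locally.
DELIVERY_RISK = {                  # business-risk weighting by delivery model
    "on-campus": 0.0,
    "uk-partner": 0.5,
    "overseas, non-English": 1.0,
}

def course_risk_score(nss_satisfaction: float,     # 0-1, from student evaluations
                      continuation_rate: float,    # 0-1, retention outcome
                      good_degree_rate: float,     # 0-1, award outcomes
                      delivery: str) -> float:
    """Higher score = higher risk = more QA scrutiny. Weights are illustrative."""
    outcome_risk = 1 - (0.4 * nss_satisfaction
                        + 0.3 * continuation_rate
                        + 0.3 * good_degree_rate)
    return round(0.7 * outcome_risk + 0.3 * DELIVERY_RISK[delivery], 3)

print(course_risk_score(0.85, 0.92, 0.70, "on-campus"))             # lower risk
print(course_risk_score(0.70, 0.80, 0.55, "overseas, non-English")) # higher risk
```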

It would mean a QA system where an approach of “one size fits all” would no longer apply, but it might allow some staff to spend less time on activities that can become mindless busywork, and focus more on the real work of a university – generating knowledge and helping students to succeed.