Latest Employment Performance Indicators

This week HESA released their latest data on performance indicators for UK institutions in terms of employment – essentially the outcome of the DLHE survey for those students who graduated in 2016.

Many will look at these with increasing interest – after all, this is one of the indicators used in TEF, and so anyone who might be thinking of re-applying will look closely to see if changes here put them in a potentially better place.

Equally, this data will feed through into next year’s league tables, so again university management teams will be calculating to see if this helps them climb the greasy pole of the rankings.

From HESA’s page:

The proportion of full-time first degree graduates in employment and/or further study continues to show a steady rise….This year has seen a slight fall in the proportion moving into employment only, with there being a rise in the percentage going into further study.

What is interesting is to see how institutions performed against their benchmark, and also to see who has changed significantly over the last year.

Looking at the tables, most institutions are close to their benchmark, and few are flagged as having a significant difference. However, there are those who are significantly below (indicated as -) and those significantly above (+) benchmark. Looking at the gap between indicator and benchmark, and also at performance in the previous year, we can try to see whether these are institutions where employment is always good, always poor, or has changed significantly between the two survey years.
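The flagging logic can be sketched as a simple comparison of indicator against benchmark. This is a rough illustration only – the three-point threshold below is an assumption for the example, not HESA’s method, which uses a proper statistical significance test:

```python
def flag(indicator: float, benchmark: float, sig_gap: float = 3.0) -> str:
    """Label an institution '+' if well above benchmark, '-' if well
    below, or '' if close to it. The 3-point threshold is an assumed
    stand-in for HESA's significance test, purely for illustration."""
    gap = indicator - benchmark
    if gap >= sig_gap:
        return "+"
    if gap <= -sig_gap:
        return "-"
    return ""

# Hypothetical employment indicators (percentages) vs benchmarks
print(flag(95.2, 91.0))  # well above benchmark
print(flag(88.0, 93.5))  # well below benchmark
print(flag(92.0, 91.5))  # close to benchmark
```

Running the same comparison on two years of data is then enough to separate the persistently above- or below-benchmark institutions from those whose position has changed.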

Playing with the HESA data for employment of full-time students, we can see that some universities or colleges repeatedly miss their benchmark – for instance, UCB and Bolton.

Equally, Coventry, Arts University Bournemouth, DMU, UWL and Wolverhampton repeatedly exceed their benchmark for employment, while Staffordshire shows a big jump, from being under benchmark last year to being significantly above this year.

With the change to Graduate Outcomes instead of institutionally managed DLHE in future, one of the key variables – the localised interpretation of the survey methodology – will be removed, and we may see some realignment of data.

The continued rise in the numbers going into employment and further study is, overall, to be welcomed, but maybe with two caveats. First, this data does not show the numbers going into graduate roles. Secondly, we have to remember that employment is only one outcome of studying for a degree.


TEF – the finish line is in sight

The finish line is now in sight: across the country, policy wonks and planners are finessing their submissions for the Teaching Excellence Framework.

I’ve previously written for MediaFHE on the decision to rank providers as gold, silver or bronze, and how this system could be seen to be flawed.

rankings-gold-silver-and-bronze

An interesting article published this week by Gordon McKenzie, CEO of GuildHE, questioned the amount of predestination vs free will in TEF.

“..this may just be the logical consequence of metrics that may be the best we have but are not a perfect proxy for teaching excellence; if the measure is inherently vulnerable then the narrative has to concentrate on shoring it up. But it is also a bit of a shame. While the specification does touch on examples of the rich activity that makes for an excellent learning environment and the highest quality teaching, I fear this richness will get squeezed out of the 15 pages to which submissions are limited and will fall victim to the need to feed the metrics. The structure of any performance assessment framework tends to shape the responses and behaviour of those being assessed. As teachers teach to the test, so providers will submit to the metrics.”

Looking at the assessment process, the implication is that the metrics being used – National Student Survey, DLHE and non-continuation rates, with evidence of how these are split based on student demographics – are going to be the primary determinant of a provider’s TEF outcome. The updated guidance from HEFCE (originally published in September and updated this week) reinforces this:

Looking into the scoring process (sections 7.10 and 7.11), we learn that:

“A provider with three or more positive flags (either + or ++) and no negative flags (either – or – – ) should be considered initially as Gold.

A provider with two or more negative flags should be considered initially as Bronze, regardless of the number of positive flags. Given the focus of the TEF on excellence above the baseline, it would not be reasonable to assign an initial rating above Bronze to a provider that is below benchmark in two or more areas.

All other providers, including those with no flags at all, should be considered initially as Silver.

In all cases, the initial hypothesis will be subject to greater scrutiny and in the next steps, and may change in the light of additional evidence. This is particularly so for providers that have a mix of positive and negative flags.”
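Read as a decision procedure, the rules in 7.10 and 7.11 reduce to a few lines. This is a sketch based only on the quoted text: it works purely on flag counts and ignores the subsequent scrutiny steps that can change the initial hypothesis:

```python
def initial_hypothesis(positive_flags: int, negative_flags: int) -> str:
    """Initial TEF rating from core-metric flag counts, following the
    rules quoted from sections 7.10 and 7.11 above (a sketch only)."""
    if negative_flags >= 2:
        return "Bronze"   # two or more negative flags, regardless of positives
    if positive_flags >= 3 and negative_flags == 0:
        return "Gold"     # three or more positive flags and no negatives
    return "Silver"       # all other providers, including those with no flags
```

Note the asymmetry: a provider with three positive flags and one negative flag starts at Silver, not Gold, while a single extra negative flag drops it straight to Bronze.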

All providers received their illustrative metrics back in July 2016, with the note that the final versions would not vary significantly. Indeed, looking at the actual data provided this week, we can see that there has been minimal change.

So it’s like a great game of poker – no-one is revealing their hand, or saying yet how they will approach the written submission, but knowing how heavily the metrics will influence the initial assessment of gold, silver or bronze, most providers already have a pretty good idea of their likely outcome.

For those providers who have the most clear-cut metrics – the gold and the bronze award winners – the results would seem to be predestined. With seemingly little opportunity for contextual explanations to change the decision of the TEF assessors, those providers will be able to say now what they expect to score in TEF. They’ll also know in which areas they would need to improve in future, or which groups of students they might need to focus on. Those who have a mixture of good metrics and no significance flags, and perhaps only one poor score, will be able to create a narrative for a silver award.

One thing we should welcome is the emphasis on different groups of students in the split metrics – the use of these figures, and the possible impact on a university’s TEF rating of poor experience or outcomes for students from WP backgrounds or non-white ethnicities, might act as a nudge to push the social mobility agenda that universities can influence.

It’s also worth noting in the guidance a comment on NSS scores in 7.21b:

“Assessors should be careful not to overweight information coming from the NSS, which provides three separate metrics in two out of three aspects, and ensure that positive performance on these metrics is triangulated against performance against the other metrics and additional evidence. They should also bear in mind that it has been suggested that, in some cases, stretching and rigorous course design, standards and assessment (features of criterion TQ326), could adversely affect NSS scores.”

Heaven forfend that one of our “top” universities fails to do well because of a poor score for student experience.

And finally on outcomes (section 7.32):

“Should a provider include very little additional evidence in its submission, proportionately more weight will be placed on the core and split metrics in making decisions. In the extreme case where a provider submission contains no substantive additional evidence, assessors will be required to make a judgement based on the core and split metrics alone, according to the following rules:

Five or six positive flags in the core metrics for the mode of delivery in which it teaches the most students and no negative flags in either mode of delivery or split metrics confers a rating of Gold.

No flags, one, two, three or four positive flags in the core metrics for the mode of delivery in which it teaches the most students and no negative flags in either mode of delivery or split metrics confers a rating of Silver.

Any negative flags in either mode of delivery for any core or split metric confers a rating of Bronze.”
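Again treating the quoted rules as a decision procedure, the metrics-only case in 7.32 can be sketched as follows. As in the quote, the flag counts are assumed to relate to the mode of delivery in which the provider teaches the most students:

```python
def metrics_only_rating(positive_flags: int, negative_flags: int) -> str:
    """TEF rating when a submission contains no substantive additional
    evidence, following the rules quoted from section 7.32 (a sketch)."""
    if negative_flags > 0:
        return "Bronze"   # any negative flag in any core or split metric
    if positive_flags >= 5:
        return "Gold"     # five or six positive flags and no negatives
    return "Silver"       # zero to four positive flags and no negatives
```

Compare this with the initial-hypothesis rules: with no written submission, a single negative flag is enough for Bronze, where the standard route tolerates one.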

If your own assessment of your score is that you pass the threshold for satisfactory quality, but that you have too many poor scores, then why would you put much effort into the written submission? You’re going to get a bronze award anyway.

And I still don’t think that’s how medals are awarded.

Differences in Student Outcomes

Successful outcomes for students are often used as a proxy for institutional quality, hence the use of good degree outcomes, or value added, in league tables. The forthcoming Teaching Excellence Framework will almost certainly look at student outcomes as a measure also. However, not all students succeed equally, and we know from our own work at StaffsUni of the gaps in attainment between different groups of students.

The recent Green Paper, as well as highlighting the possible future TEF, indicates the government’s desire to see an increase in numbers of students from the most disadvantaged backgrounds as well as looking to ensure that all students can achieve.

In the light of this, last Monday I attended a HEFCE conference in London “Addressing differences in student outcomes: Developing strategic responses”, which looked at the findings of research into differential outcomes from Kings College London, and was an opportunity to hear from others in the sector on how they are tackling these issues.

Sessions attended were: the introduction by Chris Millward, Director of Policy at HEFCE; a presentation by Anna Mountford-Zimdars of KCL; a session by Sorana Vieru and Malia Bouattia of NUS; and finally a session by Philip Plowden, DVC of the University of Derby.

These are my notes of the day. Copies of the presentations can be viewed here.

Chris Millward – HEFCE Director of Policy

Chris Millward started by considering where the government is on this agenda, linking the Green Paper, the Treasury plan and plans from BIS.

The government wants to see a more diverse range of backgrounds in HE, in terms of entry, success and outcomes. For instance: doubling the number of students from disadvantaged backgrounds by 2020; increasing the number of BME students by 20% by 2020; and asking the sector to address differences in outcomes.

This means more responsibility for universities together with strengthened guidance to OFFA and the potential role of the Office for Students. There is an anticipated stronger role in quality assurance processes through the impact of TEF and the future need to measure difference in outcomes based on data and metrics agreed by government. This will lead to more targeted funding together with more emphasis on meeting obligations.

The HEFCE analysis shows an attainment gap for BME students, based on A-level analysis, and the more other factors you add in, the bigger the gaps become.

In addition, when looking at POLAR3 domicile, then there are further unexplained HE outcomes.

When considering students with disability, then the data suggests that those students who received DSA support perform above average, while those without perform less well.

On postgraduate progression, there is currently an unexplained difference in outcomes based on POLAR3 quintiles.

When considering employment and looking at the 40 month survey rather than the 6 month DLHE, all POLAR3 quintiles have worse outcomes than quintile 5 and for professional employment in particular. There are worse outcomes for students with disability, irrespective of DSA and there are worse employment outcomes for all categories of BME students and particularly in professional employment. Finally on gender, men perform worse overall on employment, but better in professional employment.

The HEFCE approaches to working on closing the gaps in outcomes include:

  • National outreach programme
  • Funding for disabled students
  • Supporting successful outcomes
  • Catalyst fund

Anna Mountford-Zimdars – KCL

Dr Mountford-Zimdars presented the outcomes of a major piece of research into differential outcomes, which is available here.

“Access without success is no opportunity”

The research considered three questions:

  • What is the pattern – empirical?
  • How do we explain it – causal model?
  • How do we change it effectively – policy and empirical?

The question was asked: “Do we need causality – if an intervention works, does the causal model matter?”

The pattern of differential attainment was explained using a model that looked through a lens of macro/meso/micro levels and at experiences pre-HE, in HE and post-HE.

Four explanatory dimensions were proposed:

  • Curricula and learning
  • Relationships – a sense of belonging is probably the most important factor
  • Cultural, social and economic capital
  • Psychosocial and identity factors

From the research, which involved questioning a large number of institutions, the level of awareness of the issue varied, although this may be changing now, possibly due to the proposals in TEF.

In terms of those institutions that tackled the differential outcomes issue most successfully:

  • A whole-institution approach is most successful
  • Students, academics and professional services need to work together
  • Bottom-up approaches with strategic support
  • Universal and targeted interventions

Effective interventions were seen to be:

  • Improvements to T&L
  • Inclusive learning and curricula
  • Deconstructing assessment
  • Meaningful interactions
  • Role models and mentoring
  • Engagement with institution
  • Generally there were few evaluations, especially a lack of long-term evaluations

The presentation ended with five groups of recommendations:

  • Evidence base
  • Raising awareness
  • Embedding agenda
  • Staff as change agents
  • Students as change agents

Sorana Vieru and Malia Bouattia – NUS

This presentation started from a previous NUS report, Race for Equality, and went on to look at a new NUS campaign on liberating the curriculum.

From previous NUS work, 42% of students said that the curriculum did not reflect their experiences particularly in history and philosophy. As well as looking at students as being in one particular demographic group, it was important to look at intersections between groups.

Work from NUS highlighted:

  • 23% of black students described the learning environment as cliquey
  • Disabled students are more dissatisfied in the NSS
  • 10% of trans students are not willing to speak up in class
  • Black students report lower levels of satisfaction in the NSS on assessment and feedback

There was a focus on liberation-equality-diversity and the launch of a new campaign – “Liberate my Degree”. An online hub has been provided with resources for officers and reps with training resources to allow them to engage in debate in their institutions and to support becoming co-creators of curriculum.

Getting there – Helen Hathaway and Philip Plowden

Speakers from University of Derby showed the pragmatic steps they have taken to challenge the gap in attainment between white and BME students.

In terms of background, the University has 28,000 students, most of whom are from the state school sector; 20% of these self-identified as BME. The attainment gap was 24.6% in 2009-10. The impact of the work so far is that the gap has closed to 12.4% in 2014-15, although, with an increase in attainment across all areas, this is a moving target.

The important thing is that there is no one single answer, so there was a need to stop looking for one and instead focus on the myriad interventions and see what impact they have.

  • No magic bullet
  • Post-racial inclusive approach
  • Suite of different strategies needed

Four main areas of interventions are used: Relationships, academic processes, psychological processes, and social capital.

The project at Derby explored data (down to module level) and relied on the regular programme health checks, which used a digest of metrics including attainment by ethnicity. In these, the DVC meets with programme leads to engage with course teams at the chalk face. Areas covered include outcomes, finances, reliance on clearing, and staff numbers. In particular, the programme health checks looked at “spiky” degree profiles – examining individual modules and gaps, not with an intention to play a blame game, but to ask what is going right and to ask others to consider that.

To support interventions, Derby developed PReSS – Practical Recipes for Student Success – which contains evaluations and case studies and can be accessed at http://uodpress.wordpress.com

The key lessons learned were:

  • No simple solution. Paralysis by analysis. Just have to crack on and do what works.
  • Learn from others
  • Post-racial inclusive approach. Difficult to reconcile this with some of the morning’s talks. Is this unduly dismissive of liberation approaches?
  • Importance of communication and degree of profile. But once in the mainstream it might get lost.
  • Need consistent way to measure attainment gap.
  • Important to evaluate interventions.

Points from Discussions

A lively discussion followed, and the following are just snippets of some of the topics – in some cases these reflect discussion we have had in our own institution, but I add them in almost as provocations for further debate.

  • Is there a threat to academic staff when we discuss the BME and other attainment gaps? A danger of appearing accusatory?
  • Why are there differences between subjects such as business and nursing – do cohorts have an impact? Why do the subjects with the smallest attainment gaps want to engage in the debate the most?
  • How do we check who uses the resources to support inclusive learning, and should we check?
  • How do you liberate the curriculum, and how do we re-educate staff to draw on a wider range of ideas, since they are a product of their own subject and environment?
  • What about the attainment gap for students who live at home, where home life and working get in the way of study?

Conclusions

In all, a thought-provoking day. A lot of emphasis, as always, on the BME attainment gap, but also more opportunity to explore attainment more generally and to recognise how this agenda will become increasingly important post-TEF.

In terms of what we could do next: as we develop better internal metrics for modules and courses, we can start to see how to use this information to understand better the outcomes that our students achieve. Linking this to revisions in the way in which we review our courses, from a quality assurance and enhancement perspective as well as through a more data-centric health check, would provide the opportunity to have the right discussions and to ensure that we maximise the opportunities for our students to be successful.


Latest WP Data

The latest data on widening participation have been published by HESA.

The latest statistics show that of all UK domiciled, young, full-time, first degree entrants in 2014/15:

  • 89.8% were from state schools. Two thirds of HE providers had over 90% state schools entrants.
  • 33.0% were from NS-SEC classes 4-7. This proportion varied from 10.0% to 58.3% across HE providers.
  • 11.4% were from low-participation neighbourhoods.

Interestingly, when we look at our own university’s performance on WP by digging into the data tables, we see:

  • we recruit 99.3% of our students from state schools or colleges, against a benchmark of 96.1%
  • we recruit 47.6% of our students from NS-SEC classes 4, 5, 6 and 7, against a benchmark of 42.2%
  • 23.3% of our students come from low-participation neighbourhoods, against a benchmark of 15.1%
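The gaps implied by those figures are easy to tabulate – a quick sketch using only the numbers quoted above:

```python
# Indicator vs benchmark pairs quoted above (percentages)
wp_measures = {
    "state schools or colleges": (99.3, 96.1),
    "NS-SEC classes 4-7": (47.6, 42.2),
    "low-participation neighbourhoods": (23.3, 15.1),
}

# Print how far above benchmark each indicator sits
for measure, (indicator, benchmark) in wp_measures.items():
    gap = indicator - benchmark
    print(f"{measure}: {gap:+.1f} percentage points vs benchmark")
```

On all three measures we sit several percentage points above benchmark, with the largest gap on low-participation neighbourhoods.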

The raw figures would appear to show that we do a good job in recruiting and supporting WP students.

For me, though, the questions would be: how could we support these students better? Are there ways in which, by knowing the background of our students, we could tailor our personal tutoring processes? Are there ways in which we need to provide additional study skills support to allow students to maximise success and minimise the chance of failure or withdrawal? Are there ways in which we could help students develop the necessary social and cultural capital during their time at university, to be able to maximise their opportunities when entering employment?

Some of these are easier to solve than others. As we begin to expose better information to personal tutors, we will be able to provide more of the personalised support necessary. Linking this to a technology-based approach that could predict the necessary interventions would be the next step. The trickiest one will be that of developing the cultural capital needed to succeed: this is where we know that the advantages that come from a non-WP background play out. We’ve tried to emphasise the need for this in our current Learning and Teaching Strategy. The tricky bit is going to be the implementation, while not making value judgements about the relative worth of different sets of cultural or social mores.

This last one counts as a wicked problem!


HESA Data Release

HESA have just published their Statistical First Release for student enrolments and qualifications obtained at Higher Education providers in the United Kingdom 2014/15.

This is always a useful summary, to see the size of the HE “market” and which subjects appear to be growing or in decline – data which can of course be cross-referenced to UCAS data releases to see how trends in applications map to trends in enrolments.

The headline data shows nothing new – the total number of students engaged in HE study dropped by 2%, largely due to the 6% drop in part-time enrolments. Part-time study continues to be a problematic area for the sector.

[Figure: hesa1415_chart_1]

In terms of subjects, we can see how individual subject areas are growing or in decline, which should influence the way in which institutions might want to proactively manage their portfolio.

The latest information shows that the areas of growth for undergraduate study are: biological science, computer science, subjects related to agriculture, engineering and technology, with the biggest gain in creative arts and design. On the other hand, there has been a sector wide drop in enrolments at undergraduate level again in languages, but also in business, law, history and philosophy, and education.

[Figure: hesa1415_chart_5]

On attainment – an area of interest in light of comments on possible grade inflation in the recent discussions around the Green Paper – HESA note that “of those gaining a classified first degree, the proportion who obtained a first or upper second has shown a steady increase from 64% in 2010/11 to 72% in 2014/15. In 2014/15, 22% gained a first class degree compared to 15% in 2010/11.” This steady rise will be reflected in league tables of course, but importantly for my own institution, our good degree rate has risen – not to the sector average, but to a defensible level.

Looking at data on where students come from, we can see that the UK is still a desirable location for HE study. Considering English HEIs only, the data shows:

[Figure: hesa14-15 domicile]

Not surprisingly, we see that China remains the biggest provider of students to English HEIs, with a continuing drop in students from India, Pakistan and Saudi Arabia, while there has been a big rise in students from Hong Kong.

As always the HESA data release provides excellent background information for anyone wanting an understanding of the shape of the UK HE sector, and where the trends are in types of students, their level and mode of study, their domicile, their outcomes and the attractiveness of the various subject groups.