BME Attainment 2015-16

I’ve written on this blog many times before about the challenge of differential attainment in universities. Full disclosure: I no longer work in the sector, and I’m white, middle class and educated at what is now a Russell Group university. This could be relevant, as it may contextualise my opinions.

According to the recent report from Universities UK, “Patterns and trends in UK higher education 2017“, we can once again see data on degree attainment split into a crude ethnicity breakdown of “white” and “BME”. The supplementary data provided adds “other and don’t know”.

Plotting the data shows:

while the figures provided are:

showing that the attainment gap between white and BME students stubbornly remains at about 15 percentage points when we consider “good degrees” to be a 1st or 2(i).
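For anyone wanting to replicate this from the supplementary data, the gap calculation itself is simple. A minimal sketch in Python – the award counts below are invented placeholders chosen to show a 15-point gap, not the UUK figures:

```python
# Illustrative sketch of the "good degree" gap calculation.
# The award counts are invented placeholders, not UUK data.
def good_degree_rate(firsts, upper_seconds, total_awards):
    """Percentage of awards that were a 1st or 2(i)."""
    return 100 * (firsts + upper_seconds) / total_awards

white_rate = good_degree_rate(firsts=250, upper_seconds=500, total_awards=1000)
bme_rate = good_degree_rate(firsts=150, upper_seconds=450, total_awards=1000)

gap = white_rate - bme_rate
print(f"Attainment gap: {gap:.1f} percentage points")  # 15.0 with these numbers
```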

Equally telling, however, is the wide differential in the number of firsts awarded to different groups, and the fact that BME students are more than twice as likely as their white counterparts to be awarded a third.

Clearly, a lower degree classification affects life chances in terms of potential graduate employment.

UUK highlight that

“Although part of this gap will be due to differences in entry qualifications, prior attainment and subjects taken across the two groups, HEFCE has noted that for UK-domiciled, first degree graduates at English higher education institutions, even when these factors are considered, there is still a significant gap between the proportions of white and BME students obtaining a first or upper second class degree. Further HEFCE research has also identified potential explanatory factors for this difference, which include curricula and learning (including teaching and assessment practices), relationships between staff and students, social, cultural and economic capital, and psychosocial and identity factors.”

Data and evidence have long been available to show that for given entry characteristics, BME students are less likely to obtain a good degree classification than their counterparts, so we should be asking where we have to look to understand why our universities do not appear to be operating as the progressive liberal meritocracies they claim to be.

Kalwant Bhopal, in “Addressing racial inequalities in higher education: equity, inclusion and social justice”, explores how inequalities in access to elite universities continue to exist for those from black and minority ethnic backgrounds, and suggests that gaining a place at an elite university is related to access to social and cultural capital. Referencing “The Diversity Bargain: And Other Dilemmas of Race, Admissions and Meritocracy at Elite Universities” by Natasha K. Warikoo, Professor Bhopal identifies themes that, for me, have resonance with Reni Eddo-Lodge’s recent book “Why I’m No Longer Talking to White People About Race“.

Immediately striking from a reading of these two works is the critique of meritocracy that we have clung to. Once we recognise the impact of power, then as Bhopal writes “It can be argued that the concept of meritocracy as one that is used to reproduce and legitimise class privilege or indeed a system that enables those already in positions of power to maintain their elite position and ensure that it is passed down from one generation to the next.”

Reni Eddo-Lodge defines what she calls structural racism: “This is what structural racism looks like. It is not about personal prejudice, but the collective effect of bias. It is the kind of racism that has the power to drastically impact people’s life chances. Highly educated, high earning white men are very likely to be landlords, bosses, CEOs, head teachers or university vice chancellors”. She places this definition firmly in a reading of British history, and again shows that belief in the meritocracy satisfies those who currently benefit from it.

Bhopal states:

“Universities must listen to and address the challenges that black and minority ethnic students face in higher education. There is ample evidence to suggest that black and minority ethnic students’ experience disadvantages at different stages; from admissions, their experience, whilst at university and in the class of degree they are awarded. However, few universities have policies and strategies in place to address these disadvantages. Universities must address the racism that takes place in their institutions which exists as part of the social structure of their organizations, and move away from a deficit focus which blames individuals rather than examining the institutional racism that forms part of the structures of higher education.”

Bhopal proposes mandatory unconscious bias training for staff in universities as well as identifying the need to develop social and cultural capital in good schools prior to university.

I think there are other questions to be asked, and I suggest the following.

Unconscious bias training is a good start, but Gurnam Singh of Coventry University has described one of its limitations: “it shows you’re a little bit racist, but that’s ok because it’s unconscious”. Unconscious bias assessment and training is just one step on a journey – it’s what you do next with that knowledge that counts, so how is this followed up?

Also (and from a data nerd you know this is coming), the HESA datasets can be mined to provide much more information. Institutions can identify how they perform individually against this national average (by definition, some will perform better and some worse) and combine this with their own internal data. Does this tell you anything about differential outcomes by discipline, or by department? In fact, are you even aware of how students of different ethnicity are distributed across your university? They are almost certainly not evenly distributed.
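As a sketch of the kind of internal analysis I mean – splitting good-degree rates by department and ethnicity – something like the following would work on a flat extract of award records. The records and field layout here are hypothetical; in practice they would come from the institution’s own student record system:

```python
from collections import defaultdict

# Hypothetical award records: (department, ethnicity, awarded a 1st or 2(i)).
awards = [
    ("History", "White", True), ("History", "White", True),
    ("History", "BME", True), ("History", "BME", False),
    ("Physics", "White", True), ("Physics", "BME", True),
]

tallies = defaultdict(lambda: [0, 0])  # (dept, ethnicity) -> [good, total]
for dept, ethnicity, good in awards:
    tallies[(dept, ethnicity)][1] += 1
    tallies[(dept, ethnicity)][0] += good  # True counts as 1

for (dept, ethnicity), (good, total) in sorted(tallies.items()):
    print(f"{dept:8} {ethnicity:6} {100 * good / total:5.1f}% good degrees")
```

Even a toy breakdown like this makes uneven distribution visible immediately; the real exercise is simply the same loop over a much bigger extract.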

A further issue is the feeling of belonging, and the right to belong. If, as Bhopal states, universities “maintain their status by representing themselves as white and middle class, spaces reserved for those who are just like them”, then it would be useful to explore the extent to which staffing (particularly academic, professional and management) reflects the make-up of the student body. If all your lecturers are white and middle class, then this might be sending a strong signal about who university is for. Once again, the HESA data can be used to look at staff profiles, and it would be an interesting exercise to look for any correlation between staffing profile, student profile and degree outcomes.
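That correlation exercise could start as simply as this: a Pearson coefficient across hypothetical per-department figures for BME staff share and attainment gap. The numbers are invented for illustration, and correlation alone would of course prove nothing causal:

```python
from math import sqrt

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, stdlib only."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented per-department figures: share of BME academic staff (%)
# and the white/BME attainment gap (percentage points).
staff_bme_share = [5, 10, 15, 20, 25]
attainment_gap = [18, 16, 15, 12, 10]

r = pearson(staff_bme_share, attainment_gap)
print(f"Pearson r = {r:.2f}")
# A strongly negative r would be consistent with the gap narrowing
# where staffing is more diverse.
```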

Finally, what happens on graduation day? Let’s say you have a department where 50% or more of your students are from a BME background. What did the academic procession look like, especially the senior staff? Similarly, of the recipients of honorary degrees, how many of them are from the same background and are representative and aspirational role models for your students?

I know I don’t have the answers, I’m just hoping to add some more questions to help understand, and to shine a light on areas where changes could be made.

ADDENDUM

Only moments after publishing this, my attention was drawn to a new publication on the Wonkhe website, which provides the HESA data on degree attainment, and the gaps, by institution. You might want to wander over there and read Nona Buckley-Irvine’s piece “Universities’ shame – unpicking the black attainment gap“.

 

Latest Employment Performance Indicators

This week HESA released their latest data on performance indicators for UK institutions in terms of employment – essentially the outcome of the DLHE survey for those students who graduated in 2016.

Many will look at these with increasing interest – after all, this is one of the indicators used in TEF, and so anyone who might be thinking of re-applying will look closely to see if changes here put them in a potentially better place.

Equally, this data will feed through onto next year’s league tables, so again university management teams will be calculating to see if this helps them climb the greasy pole of rankings.

From HESA’s page

The proportion of full-time first degree graduates in employment and/or further study continues to show a steady rise… This year has seen a slight fall in the proportion moving into employment only, with there being a rise in the percentage going into further study.

What is interesting is to see how institutions performed against their benchmark, and also to see who has changed significantly over the last year.

Looking at the tables, most institutions are close to their benchmark, and few are flagged as having a significant difference. However, there are those who are significantly below (indicated as -) and those significantly above (+) benchmark. Looking at the gap between indicator and benchmark, and also at performance in the previous year, we can try to see whether these are institutions where employment is always good, always poor, or has changed significantly between the two survey years.
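The flagging logic can be mimicked in a few lines. This is only a sketch: the two-point threshold is a stand-in, since HESA’s real flags come from a statistical significance test rather than a fixed cut-off, and the institution figures are invented:

```python
# Sketch of flagging indicator-vs-benchmark differences, HESA style.
def flag(indicator, benchmark, threshold=2.0):
    """Return '+', '-' or ' ' depending on distance from benchmark."""
    if indicator - benchmark > threshold:
        return "+"
    if benchmark - indicator > threshold:
        return "-"
    return " "

# Invented figures: institution -> (employment indicator %, benchmark %)
institutions = {"Uni A": (94.5, 93.8), "Uni B": (88.0, 92.5), "Uni C": (97.1, 93.0)}

for name, (indicator, benchmark) in sorted(institutions.items()):
    print(f"{name}: {indicator:.1f} vs benchmark {benchmark:.1f} [{flag(indicator, benchmark)}]")
```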

Playing with the HESA data on employment of full-time students, we can see that some universities or colleges repeatedly miss their benchmark – for instance, UCB and Bolton.

Equally, Coventry, Arts University Bournemouth, DMU, UWL and Wolverhampton repeatedly exceed their benchmark for employment, while Staffordshire shows a big jump, from being under benchmark last year to being significantly above this year.

With the change to Graduate Outcomes instead of institutionally managed DLHE in future, one of the key variables – the localised interpretation of the survey methodology – will be removed, and we may see some realignment of data.

The continued rise in the numbers going into employment and further study is, overall, to be welcomed, but with perhaps two caveats. First, this data does not show the numbers going into graduate roles. Secondly, we have to remember that employment is only one outcome of studying for a degree.


Reflections on TEF

It’s been almost a week since TEF results were made public, and some of the predictable coverage, posturing, and agonising have occurred. Here are a few of my thoughts.

The importance of the written submission

In advance, we were all told that the written submission mattered, but at the same time that the initial hypothesis, based purely on metrics, was felt to be the factor that would determine classification. Looking at the results, plenty of universities have been awarded a TEF rating higher than their initial metrics would suggest. (This is personally pleasing, since I wrote a significant amount of my previous employer’s submission.) The commentary provided by the TEF panel on each submission makes it clear that where a written submission demonstrated that an institution understood why it missed benchmarks, could explain this in terms of contextual data, and showed that activity was taking place to remediate the situation, a higher award was possible.

The press didn’t understand what was being measured.

In advance of publication I was asked on Twitter whether anyone outside the sector was going to be interested in the results. Inevitably, those papers who have a vested interest (by publishing their own university guides) or who have a reputation for being a TEF booster (I’m looking at you here, The Times) were always going to publish something.

We inevitably saw articles reminding us that Southampton, LSE and Liverpool of the Russell Group had not performed as expected, and that this showed a shake-up in the sector. Equally, there was criticism that the expected ranking or established order was not being replicated.

Any paper that publishes its own league table is going to be concerned if another form of ranking does not tally with its figures. But this is to misunderstand what TEF is – it measures performance against benchmarks, not absolute performance, hence the difficulty for some universities in scoring above already high benchmarks, and for the press in creating a simple story from a more complex narrative.

Universities love to celebrate

There was plenty of gold across those who felt they’d done well! This despite the rumblings and complaints in advance that the idea of three levels of ranking, like medals, was reductive and couldn’t possibly communicate the complexity of what a university does.

How much does it matter to the sector?

TEF clearly matters to those in the sector, and will have implications for behaviours in the future. Universities already work hard to make sure that they optimise their data returns to HESA, that they get good scores in NSS by promoting and managing survey completion, and getting good scores on DLHE by managing those returns.

In future, these activities might drive performance management behaviours in universities even more than at present, with possible unforeseen consequences – courses and subject areas that perform poorly on a key metric may no longer be considered viable, especially while TEF continues to operate at institutional level.

For planning departments, we can expect to see ever more sophisticated models of academic portfolio performance, and increased scrutiny of data returns.

(From the Modern Toss Work postcard set: http://ow.ly/hFV530cT60U)

The impact on fees has been temporarily removed, and with possible changes to funding in future (let’s face it, HE funding is back on the agenda after the recent General Election), then TEF as an instrument of marketisation through differential fees loses its power.

How much does it matter to the press?

For those in the press, TEF might just be a way to get easy headlines about perceived poor performance of established universities, while expressing shock at the performance of some FE colleges.

For the specialist press, commentariat and twitterati, TEF is a gift – something for the wonks to pore over and luxuriate in, in that quiet period at the end of an academic year.

How much does it matter to the punters?

For parents and potential students, TEF is just one more set of information to use, and has to be added to existing marketing collateral, multiple league tables, and guidance from schools and colleges. Without a clear explanation of what is being measured (particularly the issue of relative rather than absolute performance), it’s not a straightforward measure, but just one more to add to the mix. Coupled with the Guardian University Guide concept of “value added”, it’s hardly surprising that potential students aren’t always clear about what might be on offer.

Finally, TEF may just be ignored if it does not provide the confirmation that people often seek when making these kinds of decisions. For example, I have a son who wants to study History in a year’s time. Both Staffordshire and Durham scored Silver. But I’m only going to recommend one of those.

You can bet though, that universities will shout about their TEF outcome (provided it was good) at this summer’s open days.

A New Home

My previous blog has now been migrated to this site (actually this has been a mirror site for quite a long time, but never used beyond that).

It’s time to start writing new content, and building up stats again – but just to show how much the old blog was read, here’s a peek at the final usage stats:

 

And that “best ever” – that was on the day I wrote about the Guardian University Guide in 2014. In fact most of the high traffic posts have been about league tables, although one of the latest posts on “Does UK HE have a Retention Problem” has been pretty popular.

Does UK HE have a retention problem?

Last night I attended an event at King’s College London, hosted by the UPP Foundation and Wonkhe, looking at retention issues in UK higher education. The format was a series of initial thoughts from each of five panel members, followed by a lively discussion, showing the importance of this topic.


Richard Brabner

Richard introduced this as the second of three workshops on the student journey. He pointed out that HESA stats on non-continuation show that this is getting worse, especially for students from disadvantaged backgrounds. He reminded the audience that, in light of this, Les Ebdon of OFFA expects the next access agreements to focus on retention.

Liz Thomas

Liz started by explaining that UK figures for retention are in fact much better than those of most European countries. In countries with free tuition, there was a feeling that getting students out of the system was part of the quality system. In a world dominated by fees and student loans, that attitude cannot prevail: we admit students to our courses, and so we have an obligation to help them succeed. So we do have an issue around student success and retention, in particular around differential levels of success, retention and employment outcomes when we consider BME, WP and other factors.

From the HEA/Paul Hamlyn What Works project it was clear that learning and teaching is critical to student success and retention, by building a sense of belonging in the academic sphere. This goes beyond the curriculum: it is about the whole institution recognising that it needs to make students successful, and considering the role and contribution of all staff.

Sorana Vieru

Sorana of the NUS believes that UK HE does have a retention problem for some groups of students, and suggested that an unforeseen consequence of TEF is that game-playing to satisfy the metrics could exacerbate the situation. The NUS view was that the rushed nature of TEF potentially leaves dangerous holes. Since non-continuation is the key metric that universities can most readily impact, all eyes should be on retention.
Universities should invest more in those supporting activities that are evidence-based, and Sorana cited the What Works project as an example of this. If evidence is presented in accessible ways, then the NUS will champion it.

In particular, the impact for commuting students was raised – these are students with financial pressures, family and work commitments, who may have chosen to study at a local university which may not be the right university for them.

Alex Proudfoot

Alex showed that some of the issues for alternative providers are quite different. Their students are much more likely to be from a BME background, or to be aged over 30, so these providers are dealing with very different cohorts of students.

A focus for alternative providers was on delivering courses that focus on employability by creating industry links and ultimately an industry community within the college where staff and students might collaborate on projects outside of class.

In terms of pathways and transitions into HE, students who go through the same provider from level 2 and 3 have better retention at HE levels.

For students with low entry qualifications, classes on study skills are a compulsory part of the curriculum, rather than an additional optional choice for the student.

Ross Renton

Ross highlighted the huge differences in retention and success based on ethnicity. He emphasised the need to develop an understanding of who is joining your university or course, and to develop a relationship with them before they arrive or join the course.

At Hertfordshire they had previously targeted POLAR quintile 1 and 2 students on entry, and provided peer mentoring plus other additional activity, tailored to each student. Retention figures improved by 43% for these students, and DLHE shows a better rate of graduate employment. This intensive personalisation works, but it is expensive.

Ross also highlighted the fact that problems need to be owned by everyone – it’s not a matter of sending a student off to some student hub, but all academic staff need to take ownership. There is also a need to systemise personal tutoring, so that key and meaningful conversations take place at the right times for all students, including at all transition periods, long holidays etc.

In the future Ross saw some risk in being overly focused on the use of metrics and analytics – this is still about people working with people.

Panel Q&A

Key points in the Q&A session were around:

  • How do we support hourly paid lecturers – not delivering HE on the cheap, but supporting the right staff properly
  • The current retention metrics don’t allow for students to step out of HE with interim quals in a flexible framework
  • Staff also need to feel that they belong, so we need to consider institutional culture
  • How do you support students through a whole-institution approach?
  • How can we build success in L&T including retention and success into reward and recognition for staff?
  • How do we make the campus more “sticky” for students living at home? The research on commuting students suggests that they feel the campus is not for them, and that they are marginalised and invisible. Details in a prospectus will cover accommodation but not local travel. Universities were often not set up to support these students, expecting them to be in four or five days a week.
  • The tax burden for those who drop out but have student debt – the ethics of who should pay? One year of study should be seen as a success
  • Can we use analytics to create better-informed interventions? Otherwise it is difficult to personalise in a mass system without good real-time information

Takeaways

Certain key factors stand out:

  • The need to look carefully at differential retention and success, and to ensure that TEF does not drive perverse behaviours
  • The opportunities to use better analytics to personalise student support
  • The need for rigorous and meaningful personal tutor systems
  • A pressing need to understand how a sticky campus can support commuting students and meet their specific needs.

 

EdTech futures in the Connected University

Digital technology is bringing huge changes to all industries and sectors, not least higher education. It isn’t the future, it’s the present. This article summarises three recent publications: firstly, the annual NMC Horizon report that I’ve previously blogged on here; secondly, a talk by Steve Wheeler, the keynote speaker at last year’s Learning and Teaching Conference; and finally a piece by Eric Stoller, who will be delivering a keynote at this year’s conference.

Firstly let’s look at this year’s NMC Horizon report. This is categorised into:

  • Key Trends Accelerating Higher Education Technology Adoption
  • Significant Challenges Impeding Higher Education Technology Adoption
  • Important Developments in Technology for Higher Education

Usefully NMC have provided a summary of their predictions from previous years, and it’s worth noting that not all of their predictions come to pass; equally some remain on the radar for a number of years. Audrey Watters has previously provided a critique of NMC for those who’d like a different view.

Nonetheless, this is a useful starting point, and we can map our own activities against all 18 of the trends/challenges/developments, but here I’ll focus on a few.

As we walk around this campus (and many others in the UK), we can see how learning spaces are being transformed to allow different ways of learning to take place.

We have a major focus on improving staff and student digital capabilities, recognising that this will help drive innovation, as well as improve employability prospects of our graduates.

The achievement gap is one I have blogged about previously – this continues to be a difficult, multi-faceted problem. Technology will not provide all the answers, but may help level the playing field in some areas.

The possibility of a very different LMS in the future is tantalising. We know that current systems such as Blackboard and Canvas are very good at managing learners and resources – making sure the right information is provided to the right people at the right time. Changes to the way in which staff and students collaborate through co-creation and sharing could render this form of LMS redundant in future.

Away from the NMC report, Steve Wheeler of Plymouth University presented on what’s hot and what’s not in learning technology. The video is well worth watching.

Steve identifies a huge range of technologies that will likely have an impact: voice-controlled interfaces; gestural computing; the Internet of Things (pervasive computing); wearable technologies; artificial intelligence; touch surfaces for multi-touch, multi-user interaction; virtual presence; immersive tech such as Oculus Rift for VR and AR; 3D printers and maker spaces. The list goes on.

Steve identified three key elements for the future:

  • Very social
  • Very personal
  • Very mobile

and this needs to be underpinned with developing digital literacy, particularly when wading through alt-facts and fake news. Our students need to learn how to check the veracity and relevance of materials.

Steve postulates that until the development of the PC or web, everything was teacher centred. Technology allows us to become learner-centred, but have we adjusted enough to being learner led?

This should impact the way in which we assess: education and training must go from recursive to discursive, no longer repeating or regurgitating materials from the teacher, but developing problem-solving skills through a discursive approach.

The changes are:

  • Analogue to digital
  • Closed to open
  • Tethered to mobile
  • Standardised to personalised
  • Isolated to connected

 

Finally, a new blog post from Eric Stoller looks at “Student Success, Retention, and Employability – Getting Digital in a High Tech, High Touch Environment”.

Eric identifies that the more engaged a student is during their university experience, the more successful they will be. Digital offers us the opportunity to increase the channels through which we communicate with and engage with our students.

Eric (as well as Steve above, and the NMC report) highlights the importance of digital capability, particularly through the lens of employability. Students need to graduate with the digital skills they will use in the workplace, not just those that they use to complete a university course. Interestingly Eric also highlights the need to teach students about their digital presence and identity.

Finally, he refers to the existence of a digital divide (again identified by NMC as digital equity) – “If your university is students first, that means all students”. This is a challenge: by focusing on providing the right kit, and more importantly on developing the right skills and behaviours, we can get all staff and students to engage in a connected digital future.

Last year we enjoyed Steve Wheeler’s presentation at our Learning and Teaching Conference – I can’t wait to hear Eric Stoller later this year at the same event.


My Social Media Profile

As a university we are committed to becoming the Connected University, and are making great strides in changing our approach to learning and teaching, to our campus transformation and to the way in which we run the business, all enabled by digital tools and technologies.

On an individual level, we can reasonably expect colleagues to embrace aspects of digital technology to enhance their work, to change the way in which they communicate with each other, with our students and with other stakeholders.

When we look at the amount of content being created, and the amount of communication taking place in just one minute, we can’t avoid being engaged with social media:


(from https://www.domo.com/blog/data-never-sleeps-4-0/) 

At last year’s Learning and Teaching Conference, we asked attendees to pledge what they might do differently based on what they were taking away from the conference. On reviewing these, it was clear that lots of colleagues wanted to dip their toes into the world of social media, or, if they were already using such tools, to explore and expand their use further.

This short article is a reflection of how I use social media. I’m not suggesting this is the only way, and I’m sure I can identify gaps in my own practice.

As a starting point, it’s worth looking at the work of David White, who proposes that the term “digital native” has had its day, and that we shouldn’t judge a person’s digital literacy based solely on age, but in terms of how comfortable they are in using technology. White’s model of digital residents vs visitors is a useful starting point for assessing our own digital skills (in addition to the various diagnostic tests we can use).


Through this approach I can map my own digital profile, which in itself raises a number of questions: where do I live in the digital world? Can I be found? Can I be found in multiple channels? How do I maintain a level of authenticity? How do I moderate my voice between different channels and different audiences?

My social media profile then is primarily found in:

  • Twitter
  • Facebook
  • Strava
  • Flickr
  • WordPress

Twitter is my most work-related tool, although not everything posted here is work-related. As part of building an authentic voice, it’s important to reveal enough of yourself as a person – your other interests and commitments – to allow followers to gain a greater insight into you. For example, following a recent accident, the message on Twitter from a nationally known HE commentator was simply “How’s the bike?”.

Through Twitter, I’ve developed a really useful network outside the University, often with people who are influential in the sector but whom I wouldn’t meet otherwise. It means that when attending meetings across the country, more often than not you already know a lot about the people you will be meeting. And both last year’s and this year’s keynote speakers for our L&T conference came from people I’d got to know through Twitter.

We all know of the danger of social media becoming an echo chamber – it’s good to follow people who you don’t agree with on all things, otherwise we are missing the benefits of academic debate.

Facebook for me is purely social. I do follow feeds from the University and from various schools and departments, but my posts here are almost never work-related, and hopefully the privacy settings are such that I can maintain a more private profile, focused on family, friends and hobbies.

Strava is totally social – only look at this if you want to know how far and how slowly I ride a bike.

Flickr is for serious photography – quick snaps may appear on Facebook or Strava, but anything that requires any amount of editing will end up on Flickr.

WordPress is the software that powers many of the world’s blogs. This blog itself is a WordPress installation on the university system. I have a second site as a backup, where I can experiment with some additional WordPress tools and integrations. I’ve written before about why I write a blog – it provides a means to communicate in longer form than Twitter, and to offer my personal analysis of changes in the HE sector, both for internal and external consumption.

There are a whole load of tools I don’t use – Snapchat and Instagram come to mind immediately. If nothing else, I’m not a great fan of the #artificialhashtag. However, institutionally we do need to be on top of these – these are the tools our students are using.

Finally, I’ve mapped a number of other tools – WhatsApp, Skype for Business, FaceTime, FB Messenger – these are my comms channels in addition to my two email accounts.

There’s a lot to keep on top of!


TEF – the finish line is in sight

The finish line is now in sight: across the country, policy wonks and planners are finessing their submissions for the Teaching Excellence Framework.

I’ve previously written for MediaFHE on the decision to rank providers as gold, silver or bronze, and how this system could be seen to be flawed.


An interesting article was published this week by Gordon McKenzie, CEO of GuildHE, who questioned the amount of predestination vs free will in TEF.

“..this may just be the logical consequence of metrics that may be the best we have but are not a perfect proxy for teaching excellence; if the measure is inherently vulnerable then the narrative has to concentrate on shoring it up. But it is also a bit of a shame. While the specification does touch on examples of the rich activity that makes for an excellent learning environment and the highest quality teaching, I fear this richness will get squeezed out of the 15 pages to which submissions are limited and will fall victim to the need to feed the metrics. The structure of any performance assessment framework tends to shape the responses and behaviour of those being assessed. As teachers teach to the test, so providers will submit to the metrics.”

Looking at the assessment process, the implication is that the metrics being used – National Student Survey (NSS) scores, DLHE outcomes and non-continuation rates, with evidence of how these split across student demographics – are going to be the primary determinant of a provider’s TEF outcome. The updated guidance from HEFCE (originally published in September and updated this week) reinforces this:

Looking into the scoring process (sections 7.10 and 7.11), we learn that:

“A provider with three or more positive flags (either + or ++) and no negative flags (either – or – – ) should be considered initially as Gold.

A provider with two or more negative flags should be considered initially as Bronze, regardless of the number of positive flags. Given the focus of the TEF on excellence above the baseline, it would not be reasonable to assign an initial rating above Bronze to a provider that is below benchmark in two or more areas.

All other providers, including those with no flags at all, should be considered initially as Silver.

In all cases, the initial hypothesis will be subject to greater scrutiny in the next steps, and may change in the light of additional evidence. This is particularly so for providers that have a mix of positive and negative flags.”
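Boiled down, the initial hypothesis in sections 7.10 and 7.11 is a three-way rule. A minimal sketch of my own reading – the function name and signature are my invention, not anything HEFCE publishes:

```python
def initial_hypothesis(positive_flags, negative_flags):
    """Initial TEF rating hypothesis, per sections 7.10-7.11.

    positive_flags: count of + or ++ flags in a provider's core metrics.
    negative_flags: count of - or -- flags.
    """
    if negative_flags >= 2:
        # Two or more below-benchmark areas caps the starting point at
        # Bronze, regardless of how many positive flags are also held.
        return "Bronze"
    if positive_flags >= 3 and negative_flags == 0:
        return "Gold"
    # Everyone else, including providers with no flags at all.
    return "Silver"
```

So a provider with three positive flags and a single negative one starts from Silver, not Gold – one negative flag blocks Gold, but one alone is not enough for Bronze.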

All providers received their illustrative metrics back in July 2016, with the note that the final versions would not vary significantly. Indeed, looking at the actual data provided this week, we can see that there has been minimal change.

So it’s like a great game of poker – no-one is revealing their hand, or saying yet how they will approach the written submission, but knowing how heavily the metrics will influence the initial assessment of gold, silver or bronze, most providers already have a pretty good idea of their likely outcome.

For those providers with the most clear-cut metrics – the gold and bronze award winners – the results would seem to be predestined. With seemingly little opportunity for contextual explanations to change the decision of the TEF assessors, those providers can say now what they expect to score in the TEF. They’ll also know in which areas they would need to improve in future, or which groups of students they might need to focus on. Those with a mixture of good metrics, no significant flags and perhaps only one poor score will be able to craft a narrative for a silver award.

One thing we should welcome is the emphasis on different groups of students in the split metrics – the possible impact on a university’s TEF rating of poor experience or outcomes for students from WP backgrounds or non-white ethnicities might act as a nudge towards the social mobility agenda that universities can influence.

It’s also worth noting a comment in the guidance on NSS scores (section 7.21b):

“Assessors should be careful not to overweight information coming from the NSS, which provides three separate metrics in two out of three aspects, and ensure that positive performance on these metrics is triangulated against performance against the other metrics and additional evidence. They should also bear in mind that it has been suggested that, in some cases, stretching and rigorous course design, standards and assessment (features of criterion TQ3), could adversely affect NSS scores.”

Heaven forfend that one of our “top” universities fails to do well because of a poor score for student experience.

And finally on outcomes (section 7.32):

“Should a provider include very little additional evidence in its submission, proportionately more weight will be placed on the core and split metrics in making decisions. In the extreme case where a provider submission contains no substantive additional evidence, assessors will be required to make a judgement based on the core and split metrics alone, according to the following rules:

Five or six positive flags in the core metrics for the mode of delivery in which it teaches the most students and no negative flags in either mode of delivery or split metrics confers a rating of Gold.

No flags, one, two, three or four positive flags in the core metrics for the mode of delivery in which it teaches the most students and no negative flags in either mode of delivery or split metrics confers a rating of Silver.

Any negative flags in either mode of delivery for any core or split metric confers a rating of Bronze.”
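Taken at face value, these fall-back rules are even starker than the initial hypothesis. A sketch – again my own reading of section 7.32, with a hypothetical function and parameters:

```python
def metrics_only_rating(core_positive, any_negative):
    """Rating when a submission contains no substantive additional
    evidence (section 7.32). core_positive counts positive flags in the
    core metrics for the mode of delivery in which the provider teaches
    the most students; any_negative is True if any core or split metric,
    in either mode of delivery, carries a negative flag.
    """
    if any_negative:
        # A single negative flag anywhere confers Bronze.
        return "Bronze"
    if core_positive >= 5:
        # Five or six positive core flags, with a clean sheet, is Gold.
        return "Gold"
    # Zero to four positive flags and no negatives is Silver.
    return "Silver"
```

Under these rules one negative flag anywhere – core or split, either mode of delivery – is enough for Bronze, however strong the rest of the metrics.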

If your own assessment is that you pass the threshold for satisfactory quality but have too many poor scores, then why would you put much effort into the written submission? You’re going to get a bronze award anyway.

And I still don’t think that’s how medals are awarded.

Learning and Teaching Conference 2016

Last week we held our annual learning and teaching conference, around the theme of Digital Capability. What a success!

We had more attendees than ever before, and a real buzz around the building, as people moved between keynote lectures, the parallel breakout sessions and the fringe stands.

The day started with a welcome from our VC, Prof Liz Barnes, before our first keynote speaker, Helen Beetham, introduced the subject for the day with her talk “Digital Capability, Beyond Digital Capability”.

My key takeaway from the talk was the need to develop digital capability to provide a “capacity to thrive”, and that students will always want what we can uniquely offer, namely:

  • Learning relationships
  • Sense of belonging
  • Security – walled gardens with pathways out
  • Credibility
  • Distinctiveness
  • Specialism
  • Reputation

Following this, our first breakout sessions looked at: Innovative use of technology to enhance teaching; Digital support for student learning; Digital insights to improve learning, and Digital identity and capability. These provided a chance for staff from across the institution to showcase their work, and prompt discussion of how digital tools can be used to improve how we deliver our courses.


Lunchtime saw us run a fringe event for the first time – a chance to talk to university suppliers such as Blackboard and lynda.com, and to some of our support teams.


The keynote after lunch was by Steve Wheeler, on “Learning in the Age of Remix”.


Steve challenged us on how digital tools change the way we teach, how physical and digital spaces are blurring in a hyper-connected world, and how technology is not a silver bullet – it should be used wisely or not at all – and most importantly reminded us that it’s a fabulous time to be an academic.

Following further breakout sessions, we returned for a final plenary and Q&A session.


So what next?

Some great feedback has already appeared across social media under the hashtag #StaffsLT16.

If you attended, you’ll be asked to complete a Qualtrics survey.

All of the presentations are now available on our conference blog here. Videos of the keynote presentations will be available as soon as they have gone through post-production and editing.

All attendees were asked to fill in a pledge card asking: what will you do differently? We’ll be sending these back to you in due course as a reminder, and so that we can provide the development and support you need.

We’re already looking at what we can do to improve the conference experience further, and how we start to plan next year’s event.

We’ll be building our digital strategy to help all our staff and students get the most out of the technology we have, making us the Connected University. This conference was just the starting point – it’s going to be an exciting year!


Guardian University Guide 2017

The second big university league table of the year, the Guardian University Guide, was published today – the one its compilers say is the most student-friendly, as it focuses on subject-level scores in more detail and measures things that matter to students. In other words, research is not part of the table.

“The methodology focuses on subject-level league tables, ranking institutions that provide each subject area, according to their relevant statistics.

To ensure that all comparisons are as valid as possible, we ask each institution which of their students should be counted in which subject so that they will only be compared to students taking similar subjects at other universities.

Eight statistical measures are employed to approximate a university’s performance in teaching each subject. Measures relate to both input – for example, expenditure by the university on its students – and output – for example, the probability of a graduate finding a graduate-level job. The measures are knitted together to get a Guardian score, against which institutions are ranked.”

A lot of emphasis is given to student experience through the outcomes of the National Student Survey, and entry grades are dealt with twice – first in the entry tariff itself, and second in the “value added” measure, which assesses good degrees relative to the entry grades of individual students.

The top four places are unchanged – Cambridge, Oxford, St Andrews and Surrey. The new entrant into the top five is Loughborough.

The big winners this year are Manchester Met, Northumbria, City, Bradford, Anglia Ruskin, Derby, Liverpool Hope and Sunderland.

While going down are Liverpool John Moores, Queen Margaret, Brunel, Brighton, Cumbria and Birmingham City.

Pleasingly, Staffordshire University has gone up 14 places to 69th.
