Early in January the next round of the National Student Survey begins across the UK HE sector. For many, this year will be seen as a dry run for what is to come in later years, given the widely discussed Green Paper, which refers to using metrics to help gauge teaching excellence. Once we get past the first year of everyone being equally excellent, the NSS and DLHE are widely anticipated to be key measures, along with possible measures of learning gain, since the paper does hint at a lack of satisfaction with current degree classification processes.
So before we check on progress on last year’s action plans, and start to think about how we introduce this year’s survey to our students, a couple of publications from the last week are worth bearing in mind.
Firstly, a research paper from the QAA, written in part by Jo Williams of the University of Kent, to which Staffordshire University made a contribution. In “The Role of Student Satisfaction Data in Quality Assurance and Enhancement: How Providers Use Data to Improve the Student Experience”, Dr Williams looks at how different types of institution approach the NSS, and shows that across all parts of the sector, institutions and senior staff question the NSS:
In particular, the issue of question 22 of the NSS, asking students about their overall satisfaction has created endless debates in academia, if not confusion, with professionals arguing that it is methodologically and professionally wrong to base league tables on a single question which is not in itself clear. Various senior academics we spoke to concurred with this theme.
The research did show that universities in all parts of the sector listen to what students say, and that they do make changes based on what the survey reveals, for instance:
The programmes change every year so sometimes it’s because the subject changes but very often it’s because students have expressed discontent with something. Therefore, you change the personnel teaching it. You change the way you teach it. You change the content, you change the assessment, you change the feedback, you change something about it. Or sometimes you just drop it.
(University B)
Changes in practice were noted across institutions:
Our data revealed that institutions have employed various changes in response to issues students raise in the satisfaction surveys. Among other practical changes, universities have:
– recruited academic advisors and officers to take charge of the NSS
– mapped internal surveys to mirror the NSS
– renewed their focus on learning and teaching
– revisited and improved timetabling systems
– raised structures including building sites, teaching rooms and sports complexes
– revisited their feedback and assessment mechanisms
– organised briefings with students to enlighten them about feedback and assessment, the NSS and its benefits
– replaced subjects, at times personnel, whose NSS scores keep falling
– introduced or empowered various forums and platforms to meet several times in a year to discuss the NSS, among others. Such forums found at nearly all institutions include: NSS forums, student experience action plans, education boards, NSS improvement forums and learning and teaching advisory groups
Across the institutions in the research, other similarities were seen in how data was used: for instance, comparing scores across schools, holding low-scoring schools to account, and comparing with other institutions.
In terms of league tables, where an institution appears in the table seems to influence its behaviour.
In particular, institutions placed in the top 25% of the league tables appear to have a relaxed view of the NSS. They appear to put particular emphasis on improving the student experience and argue that this automatically triggers a higher satisfaction rate than being ‘obsessed with the NSS’ and improving league table position:
Whereas at the other end of the scale:
In contrast, institutions in the lower 25% of the student satisfaction league tables appear to place particular focus on improving their student satisfaction and subsequently their standings in the league tables.
The main conclusion of the work, then, is that:
In particular, institutions in the top 25% of league tables (Universities A and B) appear to prioritise improving the student experience and let the NSS take care of itself, while those in the bottom 25% (Universities C and D) prioritise their NSS league table position and subsequently employ various tactics to promote the surveys.
Despite institutions adopting different approaches to the surveys based on league table positions, institutions generally listen to students’ demands raised in surveys and have responded by instigating various changes including recruiting academic advisers and officers to take charge of the NSS; mapping internal surveys to mirror the NSS; raising structures including building sites and revising their feedback and assessment processes.
What the paper doesn’t consider is the relative ranking of NSS scores by institutions: it is perfectly possible to score well on particular NSS measures, and to appear to outperform other institutions on such a single measure, yet this may not change institutional behaviours, which may remain focused on NSS league table position rather than on the overall experience.
In other recent work, Stephen Gibbons, Eric Neumayer and Richard Perkins, writing in the Economics of Education Review (“Student satisfaction, league tables and university applications: Evidence from Britain”, Economics of Education Review 48 (2015), 148–164), make the following points:
- NSS scores have an impact on applications, but not a large one
- students do not appear to respond directly to quality cues from satisfaction scores
- students may already have well-developed knowledge of product quality based on perceptions of reputation and prestige
- student satisfaction and league table position do not have a short term effect on market demand
- the degree to which quality indicators affect demand is strongly linked to the amount of market competition for a given subject
Finally, in “Applying Models to National Surveys of Undergraduate Science Students: What Affects Ratings of Satisfaction?” (Educ. Sci. 2013, 3(2), 193–207), Langan, Dunleavey and Fielding of Manchester Metropolitan University look at what influences the results for question 22, overall satisfaction. We are all familiar with reading through a set of results with great scores for most of the questions and sections, but a lower score for this final, crucial question, which is the one used in all league tables.
The authors note the year on year consistency of results for individual subjects, noting how comparisons should be made:
Subjects were highly consistent year on year in terms of their relative performance in the satisfaction survey. This has implications for institutional decision-making particularly if subjects are wrongly compared against institutional averages, when comparisons should be made within subject areas (e.g., comparing with national subject averages, although this may be subject to error if courses contain different compositions of learners, for example in terms of ethnicity)
This is consistent with HEFCE advice, and is why, as an institution, we provide sector average scores at JACS3 subject level for comparison.
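As a rough illustration of that kind of benchmarking (a minimal sketch only, assuming hypothetical file and column names rather than the actual NSS data dictionary), a subject-level comparison in pandas might look like this:

```python
import pandas as pd

# Hypothetical inputs: column names are illustrative, not the NSS data dictionary.
# nss_course_results.csv   -> columns: course, jacs3, q22_agree (% agreement on Q22)
# nss_sector_averages.csv  -> columns: jacs3, sector_q22_agree
courses = pd.read_csv("nss_course_results.csv")
sector = pd.read_csv("nss_sector_averages.csv")

# Compare each course with the national average for its own JACS3 subject,
# rather than with the institution-wide average across all subjects.
benchmarked = courses.merge(sector, on="jacs3", how="left")
benchmarked["diff_vs_subject"] = benchmarked["q22_agree"] - benchmarked["sector_q22_agree"]

# For contrast: the potentially misleading comparison against the institutional mean,
# which mixes subjects with very different national baselines.
benchmarked["diff_vs_institution"] = benchmarked["q22_agree"] - benchmarked["q22_agree"].mean()

print(benchmarked.sort_values("diff_vs_subject")[
    ["course", "jacs3", "diff_vs_subject", "diff_vs_institution"]
])
```

Sorting on the subject-level difference highlights courses that genuinely under- or over-perform for their discipline, rather than those that simply sit in a subject with a high or low national baseline.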
Interestingly, questions about feedback were the weakest predictors of “Overall Satisfaction” whereas:
The best predictor of student satisfaction nationally for the three years analysed was “The course was well designed and running smoothly” followed by ratings of “Teaching”, “Organisation” and “Support”. This may vary slightly between subjects/institutions, so it is proposed that this type of quantitative approach to contextualising survey metrics can be used to guide institutions in resource allocation to tackle student experience challenges.
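Purely as a sketch of the kind of quantitative approach the authors describe, and not their actual method or data, one could regress the overall satisfaction score on the question-group means and read off which group is the strongest predictor; the file and column names below are assumptions:

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical data: one row per course, with mean scores for each NSS question
# group and for Q22 (overall satisfaction). Column names are assumptions.
df = pd.read_csv("nss_question_group_means.csv")
predictors = ["teaching", "assessment_feedback", "organisation", "support", "resources"]

X = sm.add_constant(df[predictors])   # add an intercept term
y = df["overall_satisfaction"]        # Q22

model = sm.OLS(y, X).fit()
print(model.summary())                # coefficient sizes suggest which groups best predict Q22
```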
So our conclusions on how we approach the next NSS, and perhaps more importantly NSS 2017, could be:
- carry on listening to students, responding and being seen to respond to surveys
- make sure we focus on all the measures that make up a league table
- make sure that courses are well organised and running smoothly
- don’t expect league table moves to be reflected immediately in increased applications
- and remember – the student experience is what really matters, not the survey itself.