Implications of ‘Dimensions of quality’ in a market environment

Graham Gibbs has published, through the HEA, a further paper on Dimensions of Quality, this time considering the market environment in which we supposedly operate and some of its implications. These are summarised below, together with some of my comments. The original paper can be accessed here.

1 This report concerns the practical implications of the use of
performance indicators for the way institutions are currently attempting
to attract students, improve quality, improve ‘value for money’, and
improve their relative standing in relation to educational provision.
Institutions are responding to this data-driven market in a variety of
ways, some of them perhaps unexpected and some with probably
negative consequences. The report suggests ways in which the use of
data in a market could be tuned up to have more positive effects.


2 The conclusions of the report are based on:
• examination of the data currently available to students and used by
institutions, and their validity and usefulness;
• literature about performance indicators in higher education, and also
literature about the effect that performance indicators and markets
have on the behaviour of organisations in any public sector, such as
schools and hospitals;
• meetings with those senior managers responsible for educational
quality within institutions, both in national gatherings and through
interviews within 12 institutions of a wide variety of types;
• examination of institutional documentation, for example about how
quality data are reported and used internally, and institutional responses
to the Browne Report.


3 It is not yet clear whether institutional attempts to improve National
Student Survey (NSS) scores and other quality indicators are having any
effect on student recruitment, let alone on learning gains. To a large
extent the market is perceived to be driven by reputation, just as in the
past. US research shows that reputation tells you almost nothing about
educational quality, use of effective educational practices, or learning
gains, but merely reflects research performance, resources and fee levels.
It is uncertain whether the use of more valid indicators of educational
quality will gradually change perceptions of what reputation is about, and
turn it into a more useful guide to student choice.


4 Data currently provided to potential students, such as Key Information
Sets (KIS), and used by institutions to make decisions, include some
valid indicators of educational quality and also include variables that are
invalid or difficult to interpret. There is scope to improve the value of the
information provided to students, and used by institutions, by changing
some of the variables and collecting and collating somewhat different data.
In particular it is not yet possible for students to see what educational
provision their fees will purchase (such as class size, which predicts learning
gains) other than the proportion of class contact hours (which does not
predict learning gains).

Is this an area of opportunity for individual institutions to make the most of the information they hold about class sizes, and about who is doing the actual teaching?
5 The aspects of educational provision that institutions pay attention to in
their internal quality assurance processes often overlook crucial indicators.
Any new quality regime should ensure that it focuses on the right variables,
and the use of valid quality indicators in KIS and elsewhere would help to
lever appropriate attention.


6 Regardless of the validity of currently available data, institutional behaviour
is being driven by data to an unprecedented extent. In most institutions
there is now an annual cycle of analysis of performance indicators at both
institutional and departmental level, followed by planning to improve them,
again at both institutional and departmental level. Departments are much
more aware of how their competitors at other institutions perform, in
relation to the main indicators. In some cases this annual analysis of data
has in effect taken over from periodic review and QAA audit as the main
driver of quality assurance and enhancement (and without this having
been planned or agreed). Any future revision of national quality assurance
mechanisms, and requirements on institutions, will need to take this reality
into account.

We have the opportunity to embed our own portfolio performance review tool into our annual processes, while at the same time reviewing how we carry out internal annual monitoring as well as responses to student surveys. We might be able to do more if we join all of these activities together, rather than seeing them as separate and distinct.
7 Most currently available data are about degree programmes, and students
apply to study degree programmes. In contrast much quality assurance,
and course design and documentation, has focused on individual modules.
In modular course structures the collection of modules that students
experience may relate loosely to the unit of analysis of the NSS. This
confronts modular institutions and modular degree programmes with major
problems in interpreting and acting on the degree-programme-level data
from the NSS. A consequence is that some institutions are greatly reducing
the number of combined Honours degrees offered and moving away from
modularity back to traditional single subject degree programmes with greater
alignment of student experience with the unit of analysis, and labelling, of
public indicators of quality. There are consequences of this shift for the
diversity of curricula and for student choice, which may have negative impacts.

This is an interesting assertion. Many institutions have moved away from joint programmes for other reasons: a confused market offer, inefficiencies in delivery, and a poor student experience where there can be a lack of belonging. However, it is true that as we reduce the portfolio, we do potentially reduce the diversity of curricula within the remaining awards. The difficulty of associating awards with published NSS results is well known – more work can be done here to make sure we understand which awards are categorised where (and why!), so that sensible interpretation of published data can be undertaken, and the right changes made.


8 There has been a considerable emphasis over the past decade on
training and accrediting individual teachers, rewarding individual teachers,
and on funding local innovation in teaching. There is a marked lack of
corresponding institutional emphasis on the effective operation of
‘programme teams’ (all those who contribute to the teaching of a degree
programme), on developing leadership of teaching, and on curriculum
design and assessment at programme level. A change of focus of national
and institutional enhancement efforts is overdue. Institutional career
structures still need to be developed that reward leadership of teaching,
rather than only individual research and individual teaching. Funding for
innovation, both within institutions and by national bodies, should be
targeted on programmes rather than on modules and on the involvement
of entire programme teams rather than on individuals.

Do we know enough about teams? For instance, all new staff are required to complete a PGCHPE, but what are we doing about experienced staff, and those non-teaching staff who are associated with the programme?
9 Many institutions are using data to identify a previously overlooked quality
problem and address it: the most common example is poor and slow
feedback to students on their assignments. Institutions are then making
very broad scale changes that affect all degree programmes and all
teachers in order to address these problems. Data are successfully driving
change and in some cases there is clear evidence of improvements in NSS
scores as a consequence of the institution-wide change. Some centrally
determined changes will limit teachers’ scope to enhance teaching in
contextually sensitive ways, and will make things worse.

I think we all recognise that a “one size fits all” approach does not always work. However, we have to ensure that when we do identify changes to be implemented across the institution, we also identify when exceptions are valid, and equally when they are not!


10 An increasing number of institutions are using data to track progress
in emphasising the ‘institutional USP’. They are marketing themselves as
distinctive in relation to a particular indicator, such as employability, and
emphasising that variable in programme-level learning outcomes and in
institution-wide quality enhancement efforts, and then collecting better
data than are currently available in order to monitor progress.


11 In light of the prominence given to overall student satisfaction data in
KIS and league tables, it is not surprising that institutions are addressing
‘satisfaction’ issues with vigour. This may be less to do with teaching than
with consistently high standards of service delivery. In some cases these
two domains of quality overlap, as with policies and practices concerning
assignment turnaround times. Many institutions have a range of initiatives
designed to improve service delivery, using NSS data to target efforts.

Yep – we’ve all got those! But the really interesting thing about consistent times for feeding back on assignments is this – even when we know we meet our targets, and even when we tell our students what we are going to do, we still receive poor results for feedback! There is still a perception gap between what we mean by feedback, what we consider to be timely, and what our students think.


12 While there is a sense in which students are being treated as consumers
of a product, institutions with good and improving NSS scores often have
initiatives that engage students as co-producers of knowledge, or partners
in an educational enterprise. Attempts to improve student engagement are
taking many forms and sometimes involve students having responsibility
for administering and interpreting student feedback questionnaires, and
acting as change agents, and also central support for activities run by the
students’ union that relate to educational provision. It is unclear to what
extent NSS scores for a programme reflect extra-curricular initiatives of this
kind, but some institutions are behaving as if they are important.


13 One focus of attention of the interviews undertaken for this report was
whether institutions are focusing on ‘value for money’ by paying renewed
attention to using cost-effective teaching methods in order to deliver
a good quality of education given the level of fees and other income.
There seems to be plenty of evidence of a squeeze on resources, and
adoption of practices that save money, but not of an equivalent focus on
using more effective methods. There is a need for a national initiative on
cost-effective teaching so that, where reduced resources force changes
to teaching practices, it might be possible to maintain or even to improve
student learning.


14 Some of the institutions that are charging the lowest fees are suffering
from competing demands to maintain or enhance their research efforts
in order to retain research degree awarding powers. Attempts to improve
teaching quality in such contexts face challenging conflicts of interest.