Radical solutions in demanding times: alternative
approaches for appropriate placing of ‘coursework
components’ in GCSE examinations
Throughout this paper, we shall use the term ‘coursework’ to describe candidate work
undertaken in the course of a learning programme, which typically has been deemed
important enough to be assessed and contributes to the award of the qualification. Usually
internally assessed and subject to external moderation, coursework is currently assigned an important function in ensuring that learning programmes cover key outcomes relating to skills and competences, and certain elements of knowledge and understanding, all of which are deemed essential to broad-based achievement. Examples include experimental techniques (science), fieldwork (geography) and creative production (art).
The importance of coursework

Skills and abilities such as practical and field work, research, independent study, extended writing and proof-reading are an integral part of a broad-based education. They are also skills strongly demanded by higher education and by employers.
So it is vital that these elements (typically associated with coursework assessment in the past) remain important outcomes of the education system.
This paper sets out a radical approach that will ensure vital elements of coursework are locked into the education system, but are not adversely affected by assessment and accountability arrangements.
Background and context: problems of the current system

The history of coursework shows marked changes of function and form. During the 1970s, 1980s and early 1990s, 'coursework' could mean that, within the same subject and the same class, different things 'representing the best work of the pupil' could be included – a report for one pupil, an artefact for another, etc. The driver for this was validity, not absolute comparability of outcomes and reliability in marking. Increased 'standardisation' of coursework and refinements in awarding body systems did attempt greater control of the measurement, through control of the tasks and of centre-level marking. 100% coursework GCSEs emerged shortly after the introduction of GCSE in 1988, and by the early 1990s 'about two-thirds of 16 year olds were taking GCSE English through syllabuses which had no examinations – they were 100% coursework. Following a change to the (national) subject criteria, coursework was reduced to 40%' (QCA 2006). In 2012, with coursework contributing up to 60% of the award, issues around the tolerances in coursework marking were implicated in the problematic results in English GCSE from AQA and Edexcel.
However, even before the development of hyper-accountability in the late 1990s, parental support and teacher guidance were driving disparities between the outcomes of external and internal assessment (compression of range and apparent leniency). Plagiarism and undue support (e.g. multiple reworkings of a piece after teacher advice) became a greater concern, resulting in the State developing 'controlled assessment'.
Evaluations by Cambridge Assessment and others1 suggest that controlled assessment is cumbersome, disliked by many teachers and loathed in some subjects (e.g. physics).
What is more, the higher education sector in the UK has complained that certain key outcomes imputed to coursework, such as research skills and independent learning, are not actually attained by pupils taking controlled assessment, suggesting that – as a particular manifestation of coursework – it has failed to achieve many of its objectives.
In the past couple of years coursework has increasingly been associated with highly instrumental behaviours, with schools using it to optimise students' grades and to reduce peaks in assessment load by 'getting some of the assessment out of the way early'. The issues around GCSE English in 2012, which resulted in a judicial review and a select committee enquiry, provide ample evidence of this.
Moreover, statistical review shows problems in the marking of coursework, which is carried out internally: a tendency towards bunched marks (poor differentiation of candidates) and upward-tilted marking in comparison with examination results (potential leniency).
Drivers in the current context put intolerable pressure on teachers, pulling them in very different directions. On the one hand their performance must continually improve, and on the other they must be impartial and reliable assessors. This leads to a highly-conflicted professional role regarding internal assessment. External accountability measures exert very high pressures for continual improvement and for attaining grades at and above the C threshold at GCSE. Many elements such as professional recognition, status and progression are contingent on performance against targets and measures. For the majority of teachers this does not lead to maladministration of assessment, but it does appear to drive bunching and upward tilting of marking, and may include a strong element of 'benefit of the doubt'. At the same time, awarding bodies expect teachers to behave as consistent, fair markers, ensuring that standards and marking practice are in line with mark schemes and national standards.
Exam boards also are conflicted. They design qualifications to national criteria, some of which lead to highly-compromised qualification structures. Indeed, the judicial review into GCSE English in 2012 – which had a 60% coursework component – cited poor design criteria, emanating from the State, as a principal contributing factor to the issues surrounding the award. Exam boards are under pressure from subject organisations and teachers to include coursework, but at the same time have to ensure dependability, something which is hugely costly (contradicting pressures to hold or reduce fee levels) and perceived as draconian and external by schools (producing tensions around locus of control).
We would assert that, in the current context of drivers and incentives (a condition of hyper-accountability on all parties), coursework assessment exerts unmanageable contradictions on teachers, and different but equally unmanageable contradictions on awarding bodies.
Yet the skills, understanding and knowledge associated with coursework remain desirable educationally. This paper examines alternative approaches to the conundrum around coursework assessment, and recommends a radical model (Model 4) which enhances both learning programmes and qualifications. The paper also outlines a fifth model, which holds considerable promise in securing the educational aims associated with coursework assessment, but which requires a more extended development period. This last model is

1 Ofqual (2011), Evaluation of the introduction of controlled assessment; Ofqual SLN Geography Forum 2011
On assessment-driven education, Cambridge Assessment has argued elsewhere that a dissonance has entered educational thinking and practice in England – ‘these things are educationally highly desirable, but if they’re not in the assessment, then we won’t teach them…’. Model 4 attempts not only to deliver dependable assessment, but to erode this dysfunctional thinking.
Current Models of Coursework

Since the introduction of coursework in the 1970s, the state has used three 'dimensions of control' as a means of ensuring coursework meets the required standards and level of dependability.
Dimension 1 – Controlling the structure of assessment
Tightly defined tasks etc. This reduces validity, increases the ability to anticipate outcomes and 'learn to the test', and is felt by teachers to be an undue restriction. But it should be seen as a dimension – from loosely structured to highly structured.

Dimension 2 – Controlling the conditions of administration

Dimension 3 – Use of moderation models
Dependence on internal processes versus external processes – this includes remote statistical moderation, sampling of work, moderation visits etc.
We assume throughout that construct definition (assessment objectives), mark schemes etc. are optimal and of consistently high quality. Management of the dimensions above should not be used to offset low quality in these aspects of the assessment.
So, taking these dimensions, we present the two principal models of coursework that have been used to date:
Model 1 – Low control of structure, low control of administration, variable approaches to external moderation
This is a recipe for serious problems in conditions of high external accountability. It was present in the late 1980s and 1990s as coursework increased across the system.
Model 2 – High control of structure, high control of administration, low externality (statistical moderation)
This is the 'controlled assessment' model, the system which contributed to the issues around GCSE English in 2012. Discussions across subjects and with a range of schools suggest that controlled assessment is widely disliked for both logistical and educational reasons, and provides poor insurance against malpractice.
What is needed

Given the problems of previous models of coursework, this section discusses the context in more detail and how these problems can be addressed.
Secure evidence regarding key elements of knowledge and performance – validity issues

Far more attention needs to be paid to subject-specific research on the genuine elements of performance which are required as outcomes of the learning programme and are essential preparation for higher education. In determining necessary educational outcomes we should rely on evidence, not assumptions. We can refer to this as 'construct elicitation' – a vital phase of assessment design – which involves determining with confidence the things which should be assessed and by what means.
Cambridge Assessment has had severe misgivings about certain design decisions regarding the proportion of coursework assessment, and coursework assessment objectives, which have flowed from national decisions regarding qualifications. This concern was reinforced by the judicial review surrounding GCSE English in 2012 which highlighted QCA-driven design decisions as a contributing factor in serious problems with the awards.
Greater use of secure evidence in determining the things which should be a focus of the qualification, and which of these should be assessed by what means, is an absolute priority.
From this emerges a recommendation regarding the way Cambridge exam boards develop each qualification and what we say to others (e.g. external subject experts, learned societies) regarding the quality of advice expected from them. For example: is understanding the need for accuracy in measurement highly predictive of being able to measure accurately? Research tells us that it is2 3.
Fotheringham’s work4 suggests that learning to measure is easier than learning the need for accuracy, but that the latter predicts sustained professional competence in the former. Learning the need for accuracy can be encouraged through practical work in the course, but assessment is best focused on whether the person understands the need for accuracy, and the degree of accuracy required, in a range of settings. This can be tested in an examination.
Furthermore, there are broader hard trade-offs in qualifications, such as whether understanding entropy (and assessing this) is more important than developing practical skills in experimentation (and assessing these).
We have to be far more evidence-based regarding the desired outcomes and the utility of coursework.
In securing this, important questions must be addressed:
– do we want general study and research skills – critical thinking etc. – and why?
– do we desire them for HE, for employment, for social goods, for individual goods?
In addressing the utility of coursework in respect to GCSE and GCE the question goes beyond (the undoubtedly important) ‘HE needs this’. Indeed, the HE sector needs to be clear what skills and knowledge it requires learners to have prior to entry onto HE courses, or whether it prefers to recruit people who are well disposed to learning these things in HE (not the same thing at all). An example of an area of HE that has been successful at this is medical education. Medical schools have looked at the respective role of selection criteria and the content of learning programmes in areas such as communication skills, professional values, and financial management.
2 Eraut, M. (1994) Developing professional knowledge and competence. London: Falmer Press.
3 Oates, T. (1992) 'Core skills and transfer: aiming high', Innovations in Education and Training International, vol. 29, no. 3, pp. 227–239.
4 Fotheringham, J. (1984) 'Transfer of training: field investigation in Youth Training', Journal of Occupational Psychology.
Skills & knowledge requirements & ‘Validity’

We need to drive much greater refinement into the discourse regarding ‘we need these skills (proposition 1) and coursework assessment is the way to deliver this (proposition 2)’. These should be treated as two separate stages of the argument regarding qualification design – proposition 2 does not naturally follow from proposition 1.
On proposition 1, HE consultations on GCE English have thrown up the need to include:
1 encouraging wider reading
2 research skills
3 ‘recreative’ skills
4 independence
5 proofreading skills
It is instructive to take each of these in turn and consider ‘alternatives to coursework’ for it:
Encouraging wider reading

This is a desirable general good emerging from the course of study. We should not use assessment alone to drive it – although assessment can have a role, it need not be exclusively coursework assessment. In the past, timed written examinations were used to encourage wider reading: unseen traditional papers could set questions on any aspect of a range of set books, which encouraged all the books to be covered in depth. The ‘transparency and fairness’ discourse ruled this out.